Test Report: KVM_Linux_crio 18384

                    
818397ea37b8941bfdd3d988b855153c5c099b26:2024-03-14:33567

Failed tests (30/325)

Order  Failed test  Duration (seconds)
39 TestAddons/parallel/Ingress 154.38
53 TestAddons/StoppedEnableDisable 154.3
149 TestFunctional/parallel/ImageCommands/ImageLoadDaemon 11.03
172 TestMutliControlPlane/serial/StopSecondaryNode 150.48
174 TestMutliControlPlane/serial/RestartSecondaryNode 55.68
176 TestMutliControlPlane/serial/RestartClusterKeepsNodes 376.91
177 TestMutliControlPlane/serial/DeleteSecondaryNode 48.97
179 TestMutliControlPlane/serial/StopCluster 142.02
239 TestMultiNode/serial/RestartKeepsNodes 313.2
241 TestMultiNode/serial/StopMultiNode 141.5
248 TestPreload 270.92
256 TestKubernetesUpgrade 469.68
298 TestStartStop/group/old-k8s-version/serial/FirstStart 294.76
308 TestStartStop/group/no-preload/serial/Stop 139.23
311 TestStartStop/group/embed-certs/serial/Stop 139.2
314 TestStartStop/group/default-k8s-diff-port/serial/Stop 139.16
315 TestStartStop/group/old-k8s-version/serial/DeployApp 0.53
316 TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive 99.7
317 TestStartStop/group/no-preload/serial/EnableAddonAfterStop 12.41
319 TestStartStop/group/embed-certs/serial/EnableAddonAfterStop 12.42
323 TestStartStop/group/old-k8s-version/serial/SecondStart 750.57
324 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop 12.38
326 TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop 544.36
327 TestStartStop/group/no-preload/serial/UserAppExistsAfterStop 544.28
328 TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop 544.19
329 TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop 543.31
330 TestStartStop/group/embed-certs/serial/AddonExistsAfterStop 396.45
331 TestStartStop/group/no-preload/serial/AddonExistsAfterStop 286.49
332 TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop 424.76
333 TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop 109.94
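
The first failure detailed below, TestAddons/parallel/Ingress, times out while curling the ingress endpoint from inside the VM: ssh reports exit status 28, which is curl's "operation timed out" code. The check being exercised is an HTTP GET with the Host header set to nginx.example.com so the ingress controller routes the request to the nginx pod. The following is a minimal Go sketch of an equivalent probe, not the test's actual code; it assumes the node IP reported in this run (192.168.39.215) is reachable from wherever the sketch is executed, whereas the test itself curls 127.0.0.1 from inside the minikube VM.

package main

import (
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	// Node IP taken from this run's logs; adjust for another cluster (assumption).
	nodeIP := "192.168.39.215"

	client := &http.Client{Timeout: 30 * time.Second}
	req, err := http.NewRequest("GET", "http://"+nodeIP+"/", nil)
	if err != nil {
		panic(err)
	}
	// Setting req.Host overrides the Host header, so the ingress rule for
	// nginx.example.com matches even though the request dials the node IP directly.
	req.Host = "nginx.example.com"

	resp, err := client.Do(req)
	if err != nil {
		// The CI run hit the equivalent condition: the request timed out.
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("%s (%d bytes)\n", resp.Status, len(body))
}
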
TestAddons/parallel/Ingress (154.38s)

=== RUN   TestAddons/parallel/Ingress
=== PAUSE TestAddons/parallel/Ingress

=== CONT  TestAddons/parallel/Ingress
addons_test.go:207: (dbg) Run:  kubectl --context addons-677681 wait --for=condition=ready --namespace=ingress-nginx pod --selector=app.kubernetes.io/component=controller --timeout=90s
addons_test.go:232: (dbg) Run:  kubectl --context addons-677681 replace --force -f testdata/nginx-ingress-v1.yaml
addons_test.go:245: (dbg) Run:  kubectl --context addons-677681 replace --force -f testdata/nginx-pod-svc.yaml
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: waiting 8m0s for pods matching "run=nginx" in namespace "default" ...
helpers_test.go:344: "nginx" [643944af-9748-4c53-a7ef-8d5ac13c429c] Pending / Ready:ContainersNotReady (containers with unready status: [nginx]) / ContainersReady:ContainersNotReady (containers with unready status: [nginx])
helpers_test.go:344: "nginx" [643944af-9748-4c53-a7ef-8d5ac13c429c] Running
addons_test.go:250: (dbg) TestAddons/parallel/Ingress: run=nginx healthy within 10.00423855s
addons_test.go:262: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'"
addons_test.go:262: (dbg) Non-zero exit: out/minikube-linux-amd64 -p addons-677681 ssh "curl -s http://127.0.0.1/ -H 'Host: nginx.example.com'": exit status 1 (2m12.025969348s)

** stderr ** 
	ssh: Process exited with status 28

** /stderr **
addons_test.go:278: failed to get expected response from http://127.0.0.1/ within minikube: exit status 1
addons_test.go:286: (dbg) Run:  kubectl --context addons-677681 replace --force -f testdata/ingress-dns-example-v1.yaml
addons_test.go:291: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 ip
addons_test.go:297: (dbg) Run:  nslookup hello-john.test 192.168.39.215
addons_test.go:306: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable ingress-dns --alsologtostderr -v=1
addons_test.go:306: (dbg) Done: out/minikube-linux-amd64 -p addons-677681 addons disable ingress-dns --alsologtostderr -v=1: (1.18689972s)
addons_test.go:311: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable ingress --alsologtostderr -v=1
addons_test.go:311: (dbg) Done: out/minikube-linux-amd64 -p addons-677681 addons disable ingress --alsologtostderr -v=1: (7.927912854s)
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p addons-677681 -n addons-677681
helpers_test.go:244: <<< TestAddons/parallel/Ingress FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestAddons/parallel/Ingress]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p addons-677681 logs -n 25: (1.354423905s)
helpers_test.go:252: TestAddons/parallel/Ingress logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |                                            Args                                             |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| delete  | -p download-only-090466                                                                     | download-only-090466 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-516622                                                                     | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-944405                                                                     | download-only-944405 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-090466                                                                     | download-only-090466 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| start   | --download-only -p                                                                          | binary-mirror-359112 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | binary-mirror-359112                                                                        |                      |         |         |                     |                     |
	|         | --alsologtostderr                                                                           |                      |         |         |                     |                     |
	|         | --binary-mirror                                                                             |                      |         |         |                     |                     |
	|         | http://127.0.0.1:42713                                                                      |                      |         |         |                     |                     |
	|         | --driver=kvm2                                                                               |                      |         |         |                     |                     |
	|         | --container-runtime=crio                                                                    |                      |         |         |                     |                     |
	| delete  | -p binary-mirror-359112                                                                     | binary-mirror-359112 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| addons  | disable dashboard -p                                                                        | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | addons-677681                                                                               |                      |         |         |                     |                     |
	| addons  | enable dashboard -p                                                                         | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | addons-677681                                                                               |                      |         |         |                     |                     |
	| start   | -p addons-677681 --wait=true                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:07 UTC |
	|         | --memory=4000 --alsologtostderr                                                             |                      |         |         |                     |                     |
	|         | --addons=registry                                                                           |                      |         |         |                     |                     |
	|         | --addons=metrics-server                                                                     |                      |         |         |                     |                     |
	|         | --addons=volumesnapshots                                                                    |                      |         |         |                     |                     |
	|         | --addons=csi-hostpath-driver                                                                |                      |         |         |                     |                     |
	|         | --addons=gcp-auth                                                                           |                      |         |         |                     |                     |
	|         | --addons=cloud-spanner                                                                      |                      |         |         |                     |                     |
	|         | --addons=inspektor-gadget                                                                   |                      |         |         |                     |                     |
	|         | --addons=storage-provisioner-rancher                                                        |                      |         |         |                     |                     |
	|         | --addons=nvidia-device-plugin                                                               |                      |         |         |                     |                     |
	|         | --addons=yakd --driver=kvm2                                                                 |                      |         |         |                     |                     |
	|         |  --container-runtime=crio                                                                   |                      |         |         |                     |                     |
	|         | --addons=ingress                                                                            |                      |         |         |                     |                     |
	|         | --addons=ingress-dns                                                                        |                      |         |         |                     |                     |
	|         | --addons=helm-tiller                                                                        |                      |         |         |                     |                     |
	| addons  | addons-677681 addons                                                                        | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | disable metrics-server                                                                      |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-677681 addons disable                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | helm-tiller --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ip      | addons-677681 ip                                                                            | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	| addons  | addons-677681 addons disable                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | registry --alsologtostderr                                                                  |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| ssh     | addons-677681 ssh cat                                                                       | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | /opt/local-path-provisioner/pvc-3ea07a54-31e2-48a3-89b2-871a7a1d26bf_default_test-pvc/file1 |                      |         |         |                     |                     |
	| addons  | addons-677681 addons disable                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:08 UTC |
	|         | storage-provisioner-rancher                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | disable nvidia-device-plugin                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | -p addons-677681                                                                            |                      |         |         |                     |                     |
	| addons  | disable inspektor-gadget -p                                                                 | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | addons-677681                                                                               |                      |         |         |                     |                     |
	| addons  | disable cloud-spanner -p                                                                    | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | addons-677681                                                                               |                      |         |         |                     |                     |
	| addons  | enable headlamp                                                                             | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC | 14 Mar 24 18:07 UTC |
	|         | -p addons-677681                                                                            |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ssh     | addons-677681 ssh curl -s                                                                   | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:07 UTC |                     |
	|         | http://127.0.0.1/ -H 'Host:                                                                 |                      |         |         |                     |                     |
	|         | nginx.example.com'                                                                          |                      |         |         |                     |                     |
	| addons  | addons-677681 addons                                                                        | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:08 UTC | 14 Mar 24 18:08 UTC |
	|         | disable csi-hostpath-driver                                                                 |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| addons  | addons-677681 addons                                                                        | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:08 UTC | 14 Mar 24 18:08 UTC |
	|         | disable volumesnapshots                                                                     |                      |         |         |                     |                     |
	|         | --alsologtostderr -v=1                                                                      |                      |         |         |                     |                     |
	| ip      | addons-677681 ip                                                                            | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:10 UTC | 14 Mar 24 18:10 UTC |
	| addons  | addons-677681 addons disable                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:10 UTC | 14 Mar 24 18:10 UTC |
	|         | ingress-dns --alsologtostderr                                                               |                      |         |         |                     |                     |
	|         | -v=1                                                                                        |                      |         |         |                     |                     |
	| addons  | addons-677681 addons disable                                                                | addons-677681        | jenkins | v1.32.0 | 14 Mar 24 18:10 UTC | 14 Mar 24 18:10 UTC |
	|         | ingress --alsologtostderr -v=1                                                              |                      |         |         |                     |                     |
	|---------|---------------------------------------------------------------------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:04:54
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:04:54.364712  952029 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:04:54.364976  952029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:54.364987  952029 out.go:304] Setting ErrFile to fd 2...
	I0314 18:04:54.364991  952029 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:54.365244  952029 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:04:54.365922  952029 out.go:298] Setting JSON to false
	I0314 18:04:54.366838  952029 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":92846,"bootTime":1710346648,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:04:54.366906  952029 start.go:139] virtualization: kvm guest
	I0314 18:04:54.368913  952029 out.go:177] * [addons-677681] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:04:54.370732  952029 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:04:54.370731  952029 notify.go:220] Checking for updates...
	I0314 18:04:54.372068  952029 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:04:54.373267  952029 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:04:54.374565  952029 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:54.375701  952029 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:04:54.376971  952029 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:04:54.378767  952029 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:04:54.411503  952029 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 18:04:54.412681  952029 start.go:297] selected driver: kvm2
	I0314 18:04:54.412694  952029 start.go:901] validating driver "kvm2" against <nil>
	I0314 18:04:54.412705  952029 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:04:54.413385  952029 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:54.413482  952029 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:04:54.428201  952029 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:04:54.428272  952029 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:04:54.428533  952029 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:04:54.428609  952029 cni.go:84] Creating CNI manager for ""
	I0314 18:04:54.428626  952029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 18:04:54.428636  952029 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:04:54.428712  952029 start.go:340] cluster config:
	{Name:addons-677681 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-677681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAg
entPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:04:54.428843  952029 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:54.430716  952029 out.go:177] * Starting "addons-677681" primary control-plane node in "addons-677681" cluster
	I0314 18:04:54.431947  952029 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:04:54.431983  952029 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:04:54.431992  952029 cache.go:56] Caching tarball of preloaded images
	I0314 18:04:54.432075  952029 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:04:54.432087  952029 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:04:54.432518  952029 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/config.json ...
	I0314 18:04:54.432545  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/config.json: {Name:mk427c4ac528b1888c171dc64b19ac1736e019d6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:04:54.432682  952029 start.go:360] acquireMachinesLock for addons-677681: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:04:54.432748  952029 start.go:364] duration metric: took 38.058µs to acquireMachinesLock for "addons-677681"
	I0314 18:04:54.432766  952029 start.go:93] Provisioning new machine with config: &{Name:addons-677681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:addons-677681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:04:54.432816  952029 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 18:04:54.434467  952029 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=4000MB, Disk=20000MB) ...
	I0314 18:04:54.434599  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:04:54.434647  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:04:54.448497  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42189
	I0314 18:04:54.448906  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:04:54.449502  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:04:54.449523  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:04:54.449832  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:04:54.450017  952029 main.go:141] libmachine: (addons-677681) Calling .GetMachineName
	I0314 18:04:54.450179  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:04:54.450334  952029 start.go:159] libmachine.API.Create for "addons-677681" (driver="kvm2")
	I0314 18:04:54.450371  952029 client.go:168] LocalClient.Create starting
	I0314 18:04:54.450402  952029 main.go:141] libmachine: Creating CA: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:04:54.821294  952029 main.go:141] libmachine: Creating client certificate: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:04:54.931802  952029 main.go:141] libmachine: Running pre-create checks...
	I0314 18:04:54.931828  952029 main.go:141] libmachine: (addons-677681) Calling .PreCreateCheck
	I0314 18:04:54.932362  952029 main.go:141] libmachine: (addons-677681) Calling .GetConfigRaw
	I0314 18:04:54.932813  952029 main.go:141] libmachine: Creating machine...
	I0314 18:04:54.932828  952029 main.go:141] libmachine: (addons-677681) Calling .Create
	I0314 18:04:54.932947  952029 main.go:141] libmachine: (addons-677681) Creating KVM machine...
	I0314 18:04:54.934373  952029 main.go:141] libmachine: (addons-677681) DBG | found existing default KVM network
	I0314 18:04:54.935129  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:54.934988  952050 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012f990}
	I0314 18:04:54.935156  952029 main.go:141] libmachine: (addons-677681) DBG | created network xml: 
	I0314 18:04:54.935170  952029 main.go:141] libmachine: (addons-677681) DBG | <network>
	I0314 18:04:54.935214  952029 main.go:141] libmachine: (addons-677681) DBG |   <name>mk-addons-677681</name>
	I0314 18:04:54.935243  952029 main.go:141] libmachine: (addons-677681) DBG |   <dns enable='no'/>
	I0314 18:04:54.935255  952029 main.go:141] libmachine: (addons-677681) DBG |   
	I0314 18:04:54.935272  952029 main.go:141] libmachine: (addons-677681) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0314 18:04:54.935286  952029 main.go:141] libmachine: (addons-677681) DBG |     <dhcp>
	I0314 18:04:54.935296  952029 main.go:141] libmachine: (addons-677681) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0314 18:04:54.935312  952029 main.go:141] libmachine: (addons-677681) DBG |     </dhcp>
	I0314 18:04:54.935322  952029 main.go:141] libmachine: (addons-677681) DBG |   </ip>
	I0314 18:04:54.935335  952029 main.go:141] libmachine: (addons-677681) DBG |   
	I0314 18:04:54.935349  952029 main.go:141] libmachine: (addons-677681) DBG | </network>
	I0314 18:04:54.935362  952029 main.go:141] libmachine: (addons-677681) DBG | 
	I0314 18:04:54.940738  952029 main.go:141] libmachine: (addons-677681) DBG | trying to create private KVM network mk-addons-677681 192.168.39.0/24...
	I0314 18:04:55.007838  952029 main.go:141] libmachine: (addons-677681) DBG | private KVM network mk-addons-677681 192.168.39.0/24 created
	I0314 18:04:55.007898  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:55.007772  952050 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:55.007916  952029 main.go:141] libmachine: (addons-677681) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681 ...
	I0314 18:04:55.007945  952029 main.go:141] libmachine: (addons-677681) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:04:55.007965  952029 main.go:141] libmachine: (addons-677681) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:04:55.247496  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:55.247354  952050 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa...
	I0314 18:04:55.544854  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:55.544721  952050 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/addons-677681.rawdisk...
	I0314 18:04:55.544891  952029 main.go:141] libmachine: (addons-677681) DBG | Writing magic tar header
	I0314 18:04:55.544905  952029 main.go:141] libmachine: (addons-677681) DBG | Writing SSH key tar header
	I0314 18:04:55.544917  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:55.544865  952050 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681 ...
	I0314 18:04:55.545032  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681
	I0314 18:04:55.545056  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:04:55.545065  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681 (perms=drwx------)
	I0314 18:04:55.545079  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:04:55.545091  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:04:55.545106  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:04:55.545115  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:04:55.545128  952029 main.go:141] libmachine: (addons-677681) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:04:55.545138  952029 main.go:141] libmachine: (addons-677681) Creating domain...
	I0314 18:04:55.545145  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:55.545177  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:04:55.545197  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:04:55.545210  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:04:55.545225  952029 main.go:141] libmachine: (addons-677681) DBG | Checking permissions on dir: /home
	I0314 18:04:55.545236  952029 main.go:141] libmachine: (addons-677681) DBG | Skipping /home - not owner
	I0314 18:04:55.546424  952029 main.go:141] libmachine: (addons-677681) define libvirt domain using xml: 
	I0314 18:04:55.546452  952029 main.go:141] libmachine: (addons-677681) <domain type='kvm'>
	I0314 18:04:55.546463  952029 main.go:141] libmachine: (addons-677681)   <name>addons-677681</name>
	I0314 18:04:55.546474  952029 main.go:141] libmachine: (addons-677681)   <memory unit='MiB'>4000</memory>
	I0314 18:04:55.546487  952029 main.go:141] libmachine: (addons-677681)   <vcpu>2</vcpu>
	I0314 18:04:55.546496  952029 main.go:141] libmachine: (addons-677681)   <features>
	I0314 18:04:55.546508  952029 main.go:141] libmachine: (addons-677681)     <acpi/>
	I0314 18:04:55.546518  952029 main.go:141] libmachine: (addons-677681)     <apic/>
	I0314 18:04:55.546528  952029 main.go:141] libmachine: (addons-677681)     <pae/>
	I0314 18:04:55.546537  952029 main.go:141] libmachine: (addons-677681)     
	I0314 18:04:55.546548  952029 main.go:141] libmachine: (addons-677681)   </features>
	I0314 18:04:55.546574  952029 main.go:141] libmachine: (addons-677681)   <cpu mode='host-passthrough'>
	I0314 18:04:55.546586  952029 main.go:141] libmachine: (addons-677681)   
	I0314 18:04:55.546598  952029 main.go:141] libmachine: (addons-677681)   </cpu>
	I0314 18:04:55.546610  952029 main.go:141] libmachine: (addons-677681)   <os>
	I0314 18:04:55.546620  952029 main.go:141] libmachine: (addons-677681)     <type>hvm</type>
	I0314 18:04:55.546632  952029 main.go:141] libmachine: (addons-677681)     <boot dev='cdrom'/>
	I0314 18:04:55.546642  952029 main.go:141] libmachine: (addons-677681)     <boot dev='hd'/>
	I0314 18:04:55.546661  952029 main.go:141] libmachine: (addons-677681)     <bootmenu enable='no'/>
	I0314 18:04:55.546679  952029 main.go:141] libmachine: (addons-677681)   </os>
	I0314 18:04:55.546709  952029 main.go:141] libmachine: (addons-677681)   <devices>
	I0314 18:04:55.546741  952029 main.go:141] libmachine: (addons-677681)     <disk type='file' device='cdrom'>
	I0314 18:04:55.546760  952029 main.go:141] libmachine: (addons-677681)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/boot2docker.iso'/>
	I0314 18:04:55.546778  952029 main.go:141] libmachine: (addons-677681)       <target dev='hdc' bus='scsi'/>
	I0314 18:04:55.546791  952029 main.go:141] libmachine: (addons-677681)       <readonly/>
	I0314 18:04:55.546802  952029 main.go:141] libmachine: (addons-677681)     </disk>
	I0314 18:04:55.546815  952029 main.go:141] libmachine: (addons-677681)     <disk type='file' device='disk'>
	I0314 18:04:55.546828  952029 main.go:141] libmachine: (addons-677681)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:04:55.546843  952029 main.go:141] libmachine: (addons-677681)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/addons-677681.rawdisk'/>
	I0314 18:04:55.546858  952029 main.go:141] libmachine: (addons-677681)       <target dev='hda' bus='virtio'/>
	I0314 18:04:55.546867  952029 main.go:141] libmachine: (addons-677681)     </disk>
	I0314 18:04:55.546878  952029 main.go:141] libmachine: (addons-677681)     <interface type='network'>
	I0314 18:04:55.546892  952029 main.go:141] libmachine: (addons-677681)       <source network='mk-addons-677681'/>
	I0314 18:04:55.546903  952029 main.go:141] libmachine: (addons-677681)       <model type='virtio'/>
	I0314 18:04:55.546915  952029 main.go:141] libmachine: (addons-677681)     </interface>
	I0314 18:04:55.546926  952029 main.go:141] libmachine: (addons-677681)     <interface type='network'>
	I0314 18:04:55.546938  952029 main.go:141] libmachine: (addons-677681)       <source network='default'/>
	I0314 18:04:55.546948  952029 main.go:141] libmachine: (addons-677681)       <model type='virtio'/>
	I0314 18:04:55.546958  952029 main.go:141] libmachine: (addons-677681)     </interface>
	I0314 18:04:55.546966  952029 main.go:141] libmachine: (addons-677681)     <serial type='pty'>
	I0314 18:04:55.546985  952029 main.go:141] libmachine: (addons-677681)       <target port='0'/>
	I0314 18:04:55.547003  952029 main.go:141] libmachine: (addons-677681)     </serial>
	I0314 18:04:55.547018  952029 main.go:141] libmachine: (addons-677681)     <console type='pty'>
	I0314 18:04:55.547029  952029 main.go:141] libmachine: (addons-677681)       <target type='serial' port='0'/>
	I0314 18:04:55.547041  952029 main.go:141] libmachine: (addons-677681)     </console>
	I0314 18:04:55.547053  952029 main.go:141] libmachine: (addons-677681)     <rng model='virtio'>
	I0314 18:04:55.547067  952029 main.go:141] libmachine: (addons-677681)       <backend model='random'>/dev/random</backend>
	I0314 18:04:55.547076  952029 main.go:141] libmachine: (addons-677681)     </rng>
	I0314 18:04:55.547087  952029 main.go:141] libmachine: (addons-677681)     
	I0314 18:04:55.547097  952029 main.go:141] libmachine: (addons-677681)     
	I0314 18:04:55.547116  952029 main.go:141] libmachine: (addons-677681)   </devices>
	I0314 18:04:55.547131  952029 main.go:141] libmachine: (addons-677681) </domain>
	I0314 18:04:55.547145  952029 main.go:141] libmachine: (addons-677681) 
	I0314 18:04:55.551348  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:2b:3c:78 in network default
	I0314 18:04:55.551960  952029 main.go:141] libmachine: (addons-677681) Ensuring networks are active...
	I0314 18:04:55.551984  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:55.552641  952029 main.go:141] libmachine: (addons-677681) Ensuring network default is active
	I0314 18:04:55.552950  952029 main.go:141] libmachine: (addons-677681) Ensuring network mk-addons-677681 is active
	I0314 18:04:55.553594  952029 main.go:141] libmachine: (addons-677681) Getting domain xml...
	I0314 18:04:55.554363  952029 main.go:141] libmachine: (addons-677681) Creating domain...
	I0314 18:04:56.730112  952029 main.go:141] libmachine: (addons-677681) Waiting to get IP...
	I0314 18:04:56.730964  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:56.731365  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:56.731393  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:56.731342  952050 retry.go:31] will retry after 197.534475ms: waiting for machine to come up
	I0314 18:04:56.930920  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:56.931401  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:56.931428  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:56.931343  952050 retry.go:31] will retry after 370.857411ms: waiting for machine to come up
	I0314 18:04:57.304149  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:57.304656  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:57.304685  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:57.304596  952050 retry.go:31] will retry after 296.072634ms: waiting for machine to come up
	I0314 18:04:57.601953  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:57.602374  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:57.602402  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:57.602337  952050 retry.go:31] will retry after 600.807306ms: waiting for machine to come up
	I0314 18:04:58.205195  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:58.205592  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:58.205616  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:58.205545  952050 retry.go:31] will retry after 730.137623ms: waiting for machine to come up
	I0314 18:04:58.936958  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:58.937391  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:58.937423  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:58.937339  952050 retry.go:31] will retry after 897.981382ms: waiting for machine to come up
	I0314 18:04:59.837089  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:04:59.837514  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:04:59.837557  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:04:59.837441  952050 retry.go:31] will retry after 965.691199ms: waiting for machine to come up
	I0314 18:05:00.804672  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:00.804957  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:00.804991  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:00.804924  952050 retry.go:31] will retry after 1.283730582s: waiting for machine to come up
	I0314 18:05:02.091162  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:02.091665  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:02.091698  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:02.091610  952050 retry.go:31] will retry after 1.63830509s: waiting for machine to come up
	I0314 18:05:03.732379  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:03.732808  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:03.732830  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:03.732744  952050 retry.go:31] will retry after 1.749635607s: waiting for machine to come up
	I0314 18:05:05.483671  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:05.484085  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:05.484111  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:05.484069  952050 retry.go:31] will retry after 1.836224245s: waiting for machine to come up
	I0314 18:05:07.323352  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:07.323799  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:07.323821  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:07.323777  952050 retry.go:31] will retry after 2.82743886s: waiting for machine to come up
	I0314 18:05:10.152674  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:10.153064  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:10.153097  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:10.153015  952050 retry.go:31] will retry after 3.480681287s: waiting for machine to come up
	I0314 18:05:13.637256  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:13.637716  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find current IP address of domain addons-677681 in network mk-addons-677681
	I0314 18:05:13.637763  952029 main.go:141] libmachine: (addons-677681) DBG | I0314 18:05:13.637610  952050 retry.go:31] will retry after 4.529173433s: waiting for machine to come up
	I0314 18:05:18.171491  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.171933  952029 main.go:141] libmachine: (addons-677681) Found IP for machine: 192.168.39.215
	I0314 18:05:18.171986  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has current primary IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.171998  952029 main.go:141] libmachine: (addons-677681) Reserving static IP address...
	I0314 18:05:18.172348  952029 main.go:141] libmachine: (addons-677681) DBG | unable to find host DHCP lease matching {name: "addons-677681", mac: "52:54:00:54:90:2b", ip: "192.168.39.215"} in network mk-addons-677681
	I0314 18:05:18.248764  952029 main.go:141] libmachine: (addons-677681) DBG | Getting to WaitForSSH function...
	I0314 18:05:18.248799  952029 main.go:141] libmachine: (addons-677681) Reserved static IP address: 192.168.39.215
	I0314 18:05:18.248840  952029 main.go:141] libmachine: (addons-677681) Waiting for SSH to be available...
	I0314 18:05:18.251411  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.251796  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:minikube Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.251823  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.252002  952029 main.go:141] libmachine: (addons-677681) DBG | Using SSH client type: external
	I0314 18:05:18.252033  952029 main.go:141] libmachine: (addons-677681) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa (-rw-------)
	I0314 18:05:18.252084  952029 main.go:141] libmachine: (addons-677681) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.215 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:05:18.252101  952029 main.go:141] libmachine: (addons-677681) DBG | About to run SSH command:
	I0314 18:05:18.252122  952029 main.go:141] libmachine: (addons-677681) DBG | exit 0
	I0314 18:05:18.376713  952029 main.go:141] libmachine: (addons-677681) DBG | SSH cmd err, output: <nil>: 
	I0314 18:05:18.376906  952029 main.go:141] libmachine: (addons-677681) KVM machine creation complete!
	I0314 18:05:18.377305  952029 main.go:141] libmachine: (addons-677681) Calling .GetConfigRaw
	I0314 18:05:18.378047  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:18.378256  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:18.378435  952029 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:05:18.378453  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:18.379781  952029 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:05:18.379802  952029 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:05:18.379808  952029 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:05:18.379814  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:18.381953  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.382280  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.382321  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.382473  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:18.382644  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.382813  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.382973  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:18.383169  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:18.383365  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:18.383379  952029 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:05:18.487511  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:05:18.487538  952029 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:05:18.487547  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:18.490339  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.490757  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.490787  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.490952  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:18.491157  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.491342  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.491490  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:18.491639  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:18.491787  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:18.491799  952029 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:05:18.597092  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:05:18.597192  952029 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:05:18.597209  952029 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:05:18.597219  952029 main.go:141] libmachine: (addons-677681) Calling .GetMachineName
	I0314 18:05:18.597517  952029 buildroot.go:166] provisioning hostname "addons-677681"
	I0314 18:05:18.597548  952029 main.go:141] libmachine: (addons-677681) Calling .GetMachineName
	I0314 18:05:18.597767  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:18.600176  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.600586  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.600615  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.600798  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:18.601008  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.601169  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.601296  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:18.601448  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:18.601645  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:18.601664  952029 main.go:141] libmachine: About to run SSH command:
	sudo hostname addons-677681 && echo "addons-677681" | sudo tee /etc/hostname
	I0314 18:05:18.719771  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: addons-677681
	
	I0314 18:05:18.719820  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:18.722337  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.722655  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.722690  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.722878  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:18.723161  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.723358  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:18.723552  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:18.723759  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:18.723990  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:18.724016  952029 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\saddons-677681' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 addons-677681/g' /etc/hosts;
				else 
					echo '127.0.1.1 addons-677681' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:05:18.839003  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
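
The hostname step above sets the guest's hostname to addons-677681 and ensures a matching 127.0.1.1 entry in /etc/hosts. A quick confirmation on the guest, as a sketch assuming a shell on the VM, would be:

    hostname                         # expected: addons-677681
    grep 'addons-677681' /etc/hosts  # expected: a 127.0.1.1 addons-677681 entry
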
	I0314 18:05:18.839047  952029 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:05:18.839128  952029 buildroot.go:174] setting up certificates
	I0314 18:05:18.839149  952029 provision.go:84] configureAuth start
	I0314 18:05:18.839169  952029 main.go:141] libmachine: (addons-677681) Calling .GetMachineName
	I0314 18:05:18.839485  952029 main.go:141] libmachine: (addons-677681) Calling .GetIP
	I0314 18:05:18.842100  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.842525  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.842556  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.842667  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:18.844882  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.845225  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:18.845254  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:18.845406  952029 provision.go:143] copyHostCerts
	I0314 18:05:18.845548  952029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:05:18.845692  952029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:05:18.845775  952029 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:05:18.845837  952029 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.addons-677681 san=[127.0.0.1 192.168.39.215 addons-677681 localhost minikube]
	I0314 18:05:19.005271  952029 provision.go:177] copyRemoteCerts
	I0314 18:05:19.005345  952029 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:05:19.005380  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.008118  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.008475  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.008506  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.008683  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.008926  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.009085  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.009369  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:19.091592  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:05:19.117419  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:05:19.142410  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:05:19.168643  952029 provision.go:87] duration metric: took 329.474978ms to configureAuth
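
The three scp calls above place the CA and the generated machine server certificate under /etc/docker on the guest. A sanity check of that material, sketched under the assumption that openssl is available on the node, would be:

    sudo openssl verify -CAfile /etc/docker/ca.pem /etc/docker/server.pem
    sudo openssl x509 -noout -text -in /etc/docker/server.pem | grep -A1 'Subject Alternative Name'
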
	I0314 18:05:19.168674  952029 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:05:19.168871  952029 config.go:182] Loaded profile config "addons-677681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:05:19.168965  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.171579  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.171963  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.171993  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.172127  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.172371  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.172535  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.172734  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.172889  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:19.173105  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:19.173127  952029 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:05:19.444043  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:05:19.444084  952029 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:05:19.444097  952029 main.go:141] libmachine: (addons-677681) Calling .GetURL
	I0314 18:05:19.445371  952029 main.go:141] libmachine: (addons-677681) DBG | Using libvirt version 6000000
	I0314 18:05:19.447952  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.448417  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.448507  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.448607  952029 main.go:141] libmachine: Docker is up and running!
	I0314 18:05:19.448622  952029 main.go:141] libmachine: Reticulating splines...
	I0314 18:05:19.448631  952029 client.go:171] duration metric: took 24.998249233s to LocalClient.Create
	I0314 18:05:19.448657  952029 start.go:167] duration metric: took 24.998323887s to libmachine.API.Create "addons-677681"
	I0314 18:05:19.448669  952029 start.go:293] postStartSetup for "addons-677681" (driver="kvm2")
	I0314 18:05:19.448682  952029 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:05:19.448700  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:19.448920  952029 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:05:19.448940  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.451293  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.451629  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.451652  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.451790  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.451964  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.452149  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.452282  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:19.536256  952029 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:05:19.541461  952029 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:05:19.541491  952029 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:05:19.541556  952029 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:05:19.541580  952029 start.go:296] duration metric: took 92.902729ms for postStartSetup
	I0314 18:05:19.541617  952029 main.go:141] libmachine: (addons-677681) Calling .GetConfigRaw
	I0314 18:05:19.542186  952029 main.go:141] libmachine: (addons-677681) Calling .GetIP
	I0314 18:05:19.544920  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.545424  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.545454  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.545746  952029 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/config.json ...
	I0314 18:05:19.545949  952029 start.go:128] duration metric: took 25.113121983s to createHost
	I0314 18:05:19.545978  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.548523  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.548858  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.548890  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.549007  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.549183  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.549336  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.549474  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.549614  952029 main.go:141] libmachine: Using SSH client type: native
	I0314 18:05:19.549793  952029 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.215 22 <nil> <nil>}
	I0314 18:05:19.549808  952029 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:05:19.653430  952029 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710439519.635920852
	
	I0314 18:05:19.653455  952029 fix.go:216] guest clock: 1710439519.635920852
	I0314 18:05:19.653462  952029 fix.go:229] Guest: 2024-03-14 18:05:19.635920852 +0000 UTC Remote: 2024-03-14 18:05:19.545962657 +0000 UTC m=+25.229010240 (delta=89.958195ms)
	I0314 18:05:19.653509  952029 fix.go:200] guest clock delta is within tolerance: 89.958195ms
	I0314 18:05:19.653514  952029 start.go:83] releasing machines lock for "addons-677681", held for 25.220756484s
	I0314 18:05:19.653536  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:19.653842  952029 main.go:141] libmachine: (addons-677681) Calling .GetIP
	I0314 18:05:19.656497  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.656850  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.656879  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.657048  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:19.657590  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:19.657735  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:19.657868  952029 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:05:19.657917  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.657952  952029 ssh_runner.go:195] Run: cat /version.json
	I0314 18:05:19.657978  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:19.660547  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.660605  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.660936  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.660980  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:19.661001  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.661015  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:19.661159  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.661285  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:19.661389  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.661451  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:19.661577  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.661623  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:19.661729  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:19.661821  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:19.761767  952029 ssh_runner.go:195] Run: systemctl --version
	I0314 18:05:19.770938  952029 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:05:19.933983  952029 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:05:19.940504  952029 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:05:19.940582  952029 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:05:19.958611  952029 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:05:19.958640  952029 start.go:494] detecting cgroup driver to use...
	I0314 18:05:19.958702  952029 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:05:19.974523  952029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:05:19.988815  952029 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:05:19.988875  952029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:05:20.002717  952029 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:05:20.017491  952029 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:05:20.136087  952029 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:05:20.278222  952029 docker.go:233] disabling docker service ...
	I0314 18:05:20.278301  952029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:05:20.294052  952029 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:05:20.308154  952029 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:05:20.445803  952029 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:05:20.562079  952029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
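
After the cri-docker and docker units are stopped, disabled, and masked above, their state can be double-checked with systemd directly (a sketch run on the guest):

    systemctl is-enabled docker.socket cri-docker.socket   # expected: masked or disabled
    systemctl is-active docker cri-docker                   # expected: inactive
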
	I0314 18:05:20.577052  952029 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:05:20.598161  952029 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:05:20.598229  952029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:05:20.610815  952029 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:05:20.610896  952029 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:05:20.623571  952029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:05:20.635569  952029 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:05:20.647909  952029 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:05:20.661156  952029 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:05:20.672587  952029 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:05:20.672657  952029 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:05:20.689534  952029 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:05:20.701074  952029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:05:20.816439  952029 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:05:20.961022  952029 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:05:20.961158  952029 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:05:20.966499  952029 start.go:562] Will wait 60s for crictl version
	I0314 18:05:20.966566  952029 ssh_runner.go:195] Run: which crictl
	I0314 18:05:20.970909  952029 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:05:21.010243  952029 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
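
The steps above rewrite /etc/crio/crio.conf.d/02-crio.conf for the pause image and the cgroupfs driver, load br_netfilter, enable IP forwarding, and restart CRI-O before probing it with crictl. Condensed into a shell sketch using the same paths and values the log shows (plus the insecure-registry file written a few steps earlier):

    printf "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" | sudo tee /etc/sysconfig/crio.minikube
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
    sudo modprobe br_netfilter && echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward
    sudo systemctl daemon-reload && sudo systemctl restart crio
    sudo crictl version    # expected: RuntimeName cri-o, RuntimeVersion 1.29.1
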
	I0314 18:05:21.010334  952029 ssh_runner.go:195] Run: crio --version
	I0314 18:05:21.038637  952029 ssh_runner.go:195] Run: crio --version
	I0314 18:05:21.070486  952029 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:05:21.071764  952029 main.go:141] libmachine: (addons-677681) Calling .GetIP
	I0314 18:05:21.074291  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:21.074642  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:21.074675  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:21.074823  952029 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:05:21.079474  952029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:05:21.093602  952029 kubeadm.go:877] updating cluster {Name:addons-677681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-677681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:05:21.093734  952029 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:05:21.093791  952029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:05:21.130208  952029 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 18:05:21.130283  952029 ssh_runner.go:195] Run: which lz4
	I0314 18:05:21.134913  952029 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 18:05:21.139864  952029 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 18:05:21.139890  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 18:05:22.891832  952029 crio.go:444] duration metric: took 1.7569478s to copy over tarball
	I0314 18:05:22.891934  952029 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 18:05:25.691833  952029 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.799863489s)
	I0314 18:05:25.691875  952029 crio.go:451] duration metric: took 2.800005594s to extract the tarball
	I0314 18:05:25.691883  952029 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 18:05:25.735280  952029 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:05:25.788238  952029 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:05:25.788275  952029 cache_images.go:84] Images are preloaded, skipping loading
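
The preload tarball was copied to /preloaded.tar.lz4, unpacked into /var with lz4-aware tar, and verified with crictl above. A manual spot check of the result (a sketch; jq is an assumption, not something the log uses):

    sudo crictl images --output json | jq -r '.images[].repoTags[]' | grep 'kube-apiserver:v1.28.4'
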
	I0314 18:05:25.788287  952029 kubeadm.go:928] updating node { 192.168.39.215 8443 v1.28.4 crio true true} ...
	I0314 18:05:25.788413  952029 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=addons-677681 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.215
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:addons-677681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:05:25.788495  952029 ssh_runner.go:195] Run: crio config
	I0314 18:05:25.841757  952029 cni.go:84] Creating CNI manager for ""
	I0314 18:05:25.841786  952029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 18:05:25.841800  952029 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:05:25.841823  952029 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.215 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:addons-677681 NodeName:addons-677681 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.215"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.215 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:05:25.841987  952029 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.215
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "addons-677681"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.215
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.215"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:05:25.842054  952029 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:05:25.853473  952029 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:05:25.853562  952029 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 18:05:25.864507  952029 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:05:25.882676  952029 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:05:25.900617  952029 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
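
The kubeadm config rendered earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new here. As a sketch, a manual dry run against the same file would look roughly like this (minikube itself does not pass --dry-run):

    sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
      kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run
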
	I0314 18:05:25.925581  952029 ssh_runner.go:195] Run: grep 192.168.39.215	control-plane.minikube.internal$ /etc/hosts
	I0314 18:05:25.930550  952029 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.215	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:05:25.944320  952029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:05:26.063518  952029 ssh_runner.go:195] Run: sudo systemctl start kubelet
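
With kubelet.service and the 10-kubeadm.conf drop-in written and the unit started, the effective unit can be inspected directly on the guest (a sketch):

    systemctl cat kubelet                      # kubelet.service plus the 10-kubeadm.conf drop-in
    sudo systemctl status kubelet --no-pager
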
	I0314 18:05:26.081820  952029 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681 for IP: 192.168.39.215
	I0314 18:05:26.081852  952029 certs.go:194] generating shared ca certs ...
	I0314 18:05:26.081875  952029 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.082028  952029 certs.go:240] generating "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:05:26.312178  952029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt ...
	I0314 18:05:26.312224  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt: {Name:mka32d65838429432bcdcf271494262fd087d349 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.312411  952029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key ...
	I0314 18:05:26.312423  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key: {Name:mk0bec2b2cbb8f2ad8735a02247f3bc5b24fb7b3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.312499  952029 certs.go:240] generating "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:05:26.441240  952029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt ...
	I0314 18:05:26.441266  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt: {Name:mk9014c12e34a80d98621b380e7e1dd821c7ada5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.441416  952029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key ...
	I0314 18:05:26.441426  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key: {Name:mkeeb5c4f391be10bb85512a67ae1a9d7362652b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.441493  952029 certs.go:256] generating profile certs ...
	I0314 18:05:26.441553  952029 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.key
	I0314 18:05:26.441571  952029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt with IP's: []
	I0314 18:05:26.539331  952029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt ...
	I0314 18:05:26.539360  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: {Name:mka43f4842de3e42d72f3b8cea2fa6fbec8facb7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.539506  952029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.key ...
	I0314 18:05:26.539516  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.key: {Name:mka6e2a335f344fb5a7964a947534ca5abda58e1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.539584  952029 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key.12f32c90
	I0314 18:05:26.539604  952029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt.12f32c90 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.215]
	I0314 18:05:26.757612  952029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt.12f32c90 ...
	I0314 18:05:26.757648  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt.12f32c90: {Name:mkc5ecdae19ee05875441268fce5f7a158f8c0b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.757808  952029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key.12f32c90 ...
	I0314 18:05:26.757822  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key.12f32c90: {Name:mk2d56c7f33c07646d104229eb2fd1910faa834c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:26.757891  952029 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt.12f32c90 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt
	I0314 18:05:26.757982  952029 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key.12f32c90 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key
	I0314 18:05:26.758034  952029 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.key
	I0314 18:05:26.758053  952029 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.crt with IP's: []
	I0314 18:05:27.019408  952029 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.crt ...
	I0314 18:05:27.019441  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.crt: {Name:mk5cc3ef71f4482d6c88931559a3dacec894c4b2 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:27.019596  952029 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.key ...
	I0314 18:05:27.019610  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.key: {Name:mk471a7d8063ae5c4f7cc6c13dfb2620c9f38b30 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:27.019770  952029 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:05:27.019807  952029 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:05:27.019843  952029 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:05:27.019868  952029 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:05:27.020599  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:05:27.060315  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:05:27.086740  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:05:27.113764  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:05:27.140760  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1419 bytes)
	I0314 18:05:27.167586  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:05:27.192811  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:05:27.219120  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:05:27.246682  952029 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:05:27.275437  952029 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:05:27.293744  952029 ssh_runner.go:195] Run: openssl version
	I0314 18:05:27.299925  952029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:05:27.311022  952029 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:05:27.315924  952029 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:05:27.315976  952029 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:05:27.322105  952029 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
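
The b5213941.0 link name is the OpenSSL subject hash of minikubeCA.pem; the symlink is what lets TLS clients that scan /etc/ssl/certs by hash find the CA. Recomputing it by hand (a sketch):

    HASH=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)   # b5213941 in this run
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"
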
	I0314 18:05:27.333423  952029 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:05:27.337900  952029 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:05:27.337951  952029 kubeadm.go:391] StartCluster: {Name:addons-677681 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:addons-677681 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:05:27.338059  952029 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:05:27.338135  952029 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:05:27.380081  952029 cri.go:89] found id: ""
	I0314 18:05:27.380168  952029 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:05:27.393974  952029 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:05:27.407344  952029 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:05:27.417523  952029 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:05:27.417542  952029 kubeadm.go:156] found existing configuration files:
	
	I0314 18:05:27.417580  952029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:05:27.427010  952029 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:05:27.427057  952029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:05:27.436564  952029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:05:27.445794  952029 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:05:27.445850  952029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:05:27.455553  952029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:05:27.464862  952029 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:05:27.464932  952029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:05:27.474387  952029 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:05:27.483444  952029 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:05:27.483487  952029 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 18:05:27.492985  952029 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 18:05:27.545948  952029 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:05:27.546062  952029 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:05:27.687974  952029 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:05:27.688106  952029 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:05:27.688237  952029 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 18:05:27.913420  952029 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:05:28.052074  952029 out.go:204]   - Generating certificates and keys ...
	I0314 18:05:28.052202  952029 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:05:28.052313  952029 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:05:28.052435  952029 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:05:28.122411  952029 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:05:28.373576  952029 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:05:28.580952  952029 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:05:28.684129  952029 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:05:28.684495  952029 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [addons-677681 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0314 18:05:28.908102  952029 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:05:28.908437  952029 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [addons-677681 localhost] and IPs [192.168.39.215 127.0.0.1 ::1]
	I0314 18:05:29.100789  952029 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:05:29.255813  952029 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:05:29.488088  952029 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:05:29.488469  952029 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:05:29.612684  952029 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:05:29.754140  952029 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:05:29.971815  952029 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:05:30.177561  952029 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:05:30.178204  952029 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:05:30.182572  952029 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:05:30.184424  952029 out.go:204]   - Booting up control plane ...
	I0314 18:05:30.184514  952029 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:05:30.184588  952029 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:05:30.184651  952029 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:05:30.201354  952029 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:05:30.203674  952029 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:05:30.203973  952029 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:05:30.327720  952029 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:05:35.828552  952029 kubeadm.go:309] [apiclient] All control plane components are healthy after 5.502204 seconds
	I0314 18:05:35.828674  952029 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:05:35.845455  952029 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:05:36.382757  952029 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:05:36.382970  952029 kubeadm.go:309] [mark-control-plane] Marking the node addons-677681 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:05:36.898861  952029 kubeadm.go:309] [bootstrap-token] Using token: zopjne.sq9azt6gbxj5aw4t
	I0314 18:05:36.900434  952029 out.go:204]   - Configuring RBAC rules ...
	I0314 18:05:36.900567  952029 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:05:36.905981  952029 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:05:36.914145  952029 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:05:36.920965  952029 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:05:36.924854  952029 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:05:36.928312  952029 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:05:36.942839  952029 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:05:37.184202  952029 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:05:37.318257  952029 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:05:37.318780  952029 kubeadm.go:309] 
	I0314 18:05:37.318862  952029 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:05:37.318875  952029 kubeadm.go:309] 
	I0314 18:05:37.318962  952029 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:05:37.318971  952029 kubeadm.go:309] 
	I0314 18:05:37.319005  952029 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:05:37.319073  952029 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:05:37.319171  952029 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:05:37.319186  952029 kubeadm.go:309] 
	I0314 18:05:37.319256  952029 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:05:37.319265  952029 kubeadm.go:309] 
	I0314 18:05:37.319333  952029 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:05:37.319341  952029 kubeadm.go:309] 
	I0314 18:05:37.319414  952029 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:05:37.319509  952029 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:05:37.319606  952029 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:05:37.319617  952029 kubeadm.go:309] 
	I0314 18:05:37.319731  952029 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:05:37.319858  952029 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:05:37.319888  952029 kubeadm.go:309] 
	I0314 18:05:37.320026  952029 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token zopjne.sq9azt6gbxj5aw4t \
	I0314 18:05:37.320167  952029 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 18:05:37.320200  952029 kubeadm.go:309] 	--control-plane 
	I0314 18:05:37.320218  952029 kubeadm.go:309] 
	I0314 18:05:37.320346  952029 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:05:37.320353  952029 kubeadm.go:309] 
	I0314 18:05:37.320470  952029 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token zopjne.sq9azt6gbxj5aw4t \
	I0314 18:05:37.320607  952029 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 18:05:37.320922  952029 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:05:37.320945  952029 cni.go:84] Creating CNI manager for ""
	I0314 18:05:37.320958  952029 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 18:05:37.322613  952029 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 18:05:37.323884  952029 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 18:05:37.348445  952029 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 18:05:37.431036  952029 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:05:37.431171  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:37.431200  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes addons-677681 minikube.k8s.io/updated_at=2024_03_14T18_05_37_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=addons-677681 minikube.k8s.io/primary=true
	I0314 18:05:37.490542  952029 ops.go:34] apiserver oom_adj: -16
	I0314 18:05:37.572893  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:38.073435  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:38.573744  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:39.073680  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:39.573508  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:40.072908  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:40.573123  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:41.073881  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:41.573069  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:42.073137  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:42.573869  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:43.073272  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:43.573499  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:44.073637  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:44.573688  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:45.073608  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:45.573853  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:46.073178  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:46.573009  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:47.073565  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:47.573626  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:48.073873  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:48.573445  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:49.073649  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:49.573475  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:50.073051  952029 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:05:50.172562  952029 kubeadm.go:1106] duration metric: took 12.741471514s to wait for elevateKubeSystemPrivileges
	W0314 18:05:50.172632  952029 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:05:50.172647  952029 kubeadm.go:393] duration metric: took 22.834699182s to StartCluster
	I0314 18:05:50.172704  952029 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:50.172885  952029 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:05:50.173330  952029 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:05:50.173542  952029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:05:50.173565  952029 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.215 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:05:50.175465  952029 out.go:177] * Verifying Kubernetes components...
	I0314 18:05:50.173655  952029 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:true csi-hostpath-driver:true dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:true gvisor:false headlamp:false helm-tiller:true inaccel:false ingress:true ingress-dns:true inspektor-gadget:true istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:true nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:true registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:true volumesnapshots:true yakd:true]
	I0314 18:05:50.173802  952029 config.go:182] Loaded profile config "addons-677681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:05:50.177419  952029 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:05:50.177436  952029 addons.go:69] Setting helm-tiller=true in profile "addons-677681"
	I0314 18:05:50.177452  952029 addons.go:69] Setting nvidia-device-plugin=true in profile "addons-677681"
	I0314 18:05:50.177479  952029 addons.go:234] Setting addon helm-tiller=true in "addons-677681"
	I0314 18:05:50.177485  952029 addons.go:69] Setting registry=true in profile "addons-677681"
	I0314 18:05:50.177492  952029 addons.go:69] Setting ingress-dns=true in profile "addons-677681"
	I0314 18:05:50.177496  952029 addons.go:69] Setting metrics-server=true in profile "addons-677681"
	I0314 18:05:50.177504  952029 addons.go:69] Setting default-storageclass=true in profile "addons-677681"
	I0314 18:05:50.177527  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177522  952029 addons.go:69] Setting gcp-auth=true in profile "addons-677681"
	I0314 18:05:50.177533  952029 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "addons-677681"
	I0314 18:05:50.177541  952029 addons.go:234] Setting addon metrics-server=true in "addons-677681"
	I0314 18:05:50.177542  952029 addons.go:234] Setting addon ingress-dns=true in "addons-677681"
	I0314 18:05:50.177546  952029 addons.go:69] Setting volumesnapshots=true in profile "addons-677681"
	I0314 18:05:50.177553  952029 mustload.go:65] Loading cluster: addons-677681
	I0314 18:05:50.177566  952029 addons.go:234] Setting addon volumesnapshots=true in "addons-677681"
	I0314 18:05:50.177548  952029 addons.go:69] Setting storage-provisioner-rancher=true in profile "addons-677681"
	I0314 18:05:50.177584  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177597  952029 addons_storage_classes.go:33] enableOrDisableStorageClasses storage-provisioner-rancher=true on "addons-677681"
	I0314 18:05:50.177490  952029 addons.go:69] Setting cloud-spanner=true in profile "addons-677681"
	I0314 18:05:50.177603  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177629  952029 addons.go:234] Setting addon cloud-spanner=true in "addons-677681"
	I0314 18:05:50.177651  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177478  952029 addons.go:69] Setting csi-hostpath-driver=true in profile "addons-677681"
	I0314 18:05:50.177733  952029 addons.go:234] Setting addon csi-hostpath-driver=true in "addons-677681"
	I0314 18:05:50.177756  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177780  952029 config.go:182] Loaded profile config "addons-677681": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:05:50.178056  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178073  952029 addons.go:69] Setting storage-provisioner=true in profile "addons-677681"
	I0314 18:05:50.178076  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178081  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.177536  952029 addons.go:234] Setting addon registry=true in "addons-677681"
	I0314 18:05:50.178093  952029 addons.go:234] Setting addon storage-provisioner=true in "addons-677681"
	I0314 18:05:50.177443  952029 addons.go:69] Setting yakd=true in profile "addons-677681"
	I0314 18:05:50.178120  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.178121  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.177600  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.178131  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178134  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178156  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178168  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178199  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178059  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178305  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178349  952029 addons.go:69] Setting inspektor-gadget=true in profile "addons-677681"
	I0314 18:05:50.178371  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178379  952029 addons.go:234] Setting addon inspektor-gadget=true in "addons-677681"
	I0314 18:05:50.178388  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178423  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.178081  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178439  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178457  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178084  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178457  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178489  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178456  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.177479  952029 addons.go:234] Setting addon nvidia-device-plugin=true in "addons-677681"
	I0314 18:05:50.178124  952029 addons.go:234] Setting addon yakd=true in "addons-677681"
	I0314 18:05:50.177485  952029 addons.go:69] Setting ingress=true in profile "addons-677681"
	I0314 18:05:50.178458  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.178556  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178561  952029 addons.go:234] Setting addon ingress=true in "addons-677681"
	I0314 18:05:50.178119  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.178877  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.178919  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.179258  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.179284  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.179295  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.179349  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.179533  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.179894  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.179912  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.199132  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39865
	I0314 18:05:50.199171  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46725
	I0314 18:05:50.199304  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36047
	I0314 18:05:50.199686  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.199798  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.200030  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.200274  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40893
	I0314 18:05:50.200282  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.200298  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.200303  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.200320  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.200496  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.200513  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.200696  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.200783  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.200819  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.200876  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.200892  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.201406  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.201448  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.201744  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.201761  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.202037  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.202083  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.202743  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.202804  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37573
	I0314 18:05:50.203266  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.203272  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.203802  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.203827  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.204924  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.204957  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.206413  952029 addons.go:234] Setting addon default-storageclass=true in "addons-677681"
	I0314 18:05:50.206457  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.206829  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.206860  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.207353  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.207420  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.207772  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.207797  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.215575  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.218605  952029 addons.go:234] Setting addon storage-provisioner-rancher=true in "addons-677681"
	I0314 18:05:50.218665  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:50.219071  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.219132  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.226884  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38827
	I0314 18:05:50.229055  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.229724  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.229746  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.230173  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.230877  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.230923  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.234793  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33547
	I0314 18:05:50.235361  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.236010  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.236029  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.236507  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.237086  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.237114  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.238569  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38649
	I0314 18:05:50.242353  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42555
	I0314 18:05:50.242916  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.243547  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.243565  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.243991  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.244245  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.246420  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38117
	I0314 18:05:50.247071  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.247185  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42301
	I0314 18:05:50.247356  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.247553  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.248082  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.248132  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.248305  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.248319  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.248757  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.248856  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.249528  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.249557  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.249769  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.251841  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.254110  952029 out.go:177]   - Using image ghcr.io/helm/tiller:v2.17.0
	I0314 18:05:50.252318  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38491
	I0314 18:05:50.252510  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.253680  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.253782  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42455
	I0314 18:05:50.254498  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45923
	I0314 18:05:50.254667  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45289
	I0314 18:05:50.256073  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.256324  952029 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-dp.yaml
	I0314 18:05:50.256340  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-dp.yaml (2422 bytes)
	I0314 18:05:50.256359  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.258292  952029 out.go:177]   - Using image docker.io/marcnuri/yakd:0.0.4
	I0314 18:05:50.256945  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.257190  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.257242  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.257895  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.258420  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.259912  952029 addons.go:426] installing /etc/kubernetes/addons/yakd-ns.yaml
	I0314 18:05:50.259926  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-ns.yaml (171 bytes)
	I0314 18:05:50.259947  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.259999  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.260560  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.260594  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.260604  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.260616  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.260769  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.260780  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.261162  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.261332  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.261353  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.261943  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.261968  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.262184  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.262255  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.262851  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.262851  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.262888  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.262908  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.264177  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.264701  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.264729  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.265088  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.265669  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.265689  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.265737  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40577
	I0314 18:05:50.266148  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.266217  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.266262  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.266621  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.266648  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.266684  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46381
	I0314 18:05:50.266812  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.266826  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.266948  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.267188  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.267257  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.267300  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.267390  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41825
	I0314 18:05:50.267969  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.268024  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.268206  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.268632  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.268648  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.268798  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.268809  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.268869  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35809
	I0314 18:05:50.269018  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.269160  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.269624  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.269658  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.270482  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.270521  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.270780  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.270884  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.270969  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44775
	I0314 18:05:50.271394  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.271413  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.271814  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.271950  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.271965  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.272531  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.272569  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.273085  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.273633  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:50.273667  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:50.273874  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.274332  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.274372  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.274829  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.275027  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.279282  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46085
	I0314 18:05:50.279830  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.280411  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.280432  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.281055  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.281278  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.283140  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.285334  952029 out.go:177]   - Using image nvcr.io/nvidia/k8s-device-plugin:v0.14.5
	I0314 18:05:50.286809  952029 addons.go:426] installing /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:05:50.286829  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/nvidia-device-plugin.yaml (1966 bytes)
	I0314 18:05:50.286850  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.290318  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.290784  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.290810  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.290877  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44981
	I0314 18:05:50.291111  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.291332  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.291484  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.291566  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.291636  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.292515  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.292541  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.292928  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.293106  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.294636  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.296388  952029 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:05:50.297650  952029 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:05:50.297669  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:05:50.297688  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.299776  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35513
	I0314 18:05:50.300197  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.300852  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.300870  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.301395  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.301431  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.301667  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.301858  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.301902  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.302080  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.302272  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.302429  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.302608  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.303246  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.305380  952029 out.go:177]   - Using image gcr.io/k8s-minikube/minikube-ingress-dns:0.0.2
	I0314 18:05:50.306702  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33113
	I0314 18:05:50.306402  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45341
	I0314 18:05:50.306807  952029 addons.go:426] installing /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:05:50.306823  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-dns-pod.yaml (2442 bytes)
	I0314 18:05:50.306842  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.307529  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.307692  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.308349  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.308368  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.308512  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.308533  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.308930  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.309002  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.309281  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.309346  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.309676  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37413
	I0314 18:05:50.310226  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.310801  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.310828  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.311344  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.311537  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.311705  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.313231  952029 out.go:177]   - Using image docker.io/registry:2.8.3
	I0314 18:05:50.312359  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.312532  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.313033  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46691
	I0314 18:05:50.313169  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.314176  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35363
	I0314 18:05:50.316162  952029 out.go:177]   - Using image gcr.io/k8s-minikube/kube-registry-proxy:0.0.5
	I0314 18:05:50.314849  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.314869  952029 out.go:177]   - Using image registry.k8s.io/ingress-nginx/controller:v1.10.0
	I0314 18:05:50.315193  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.315337  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.315813  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.316612  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.318015  952029 addons.go:426] installing /etc/kubernetes/addons/registry-rc.yaml
	I0314 18:05:50.318027  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-rc.yaml (798 bytes)
	I0314 18:05:50.318047  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.318103  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.318459  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.318715  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.319677  952029 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:05:50.322074  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37853
	I0314 18:05:50.319693  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.319707  952029 out.go:177]   - Using image registry.k8s.io/metrics-server/metrics-server:v0.7.0
	I0314 18:05:50.319934  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.320352  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.320898  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.321449  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.322429  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.323030  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37051
	I0314 18:05:50.323600  952029 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:05:50.325230  952029 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 18:05:50.325257  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 18:05:50.325278  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.323662  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.325351  952029 addons.go:426] installing /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:05:50.323715  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.323889  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.323932  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.324087  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.324664  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32931
	I0314 18:05:50.325438  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.324756  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.325484  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.325369  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ingress-deploy.yaml (16078 bytes)
	I0314 18:05:50.325501  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.325669  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.325728  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.325859  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.325929  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.326298  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.326632  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.326658  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.326871  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.327023  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.327100  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.327663  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.327683  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.327741  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.327847  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37197
	I0314 18:05:50.328005  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.328153  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.328235  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.328412  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.328923  952029 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:05:50.328939  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:05:50.328956  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.329108  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:50.329820  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:50.329839  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:50.330503  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.330633  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:50.330934  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.330961  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:50.331005  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.331087  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.331109  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.331273  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.333512  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/snapshot-controller:v6.1.0
	I0314 18:05:50.331638  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.331684  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.331693  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.331883  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.333651  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.333659  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.334067  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.334289  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.334455  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:50.334929  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml
	I0314 18:05:50.334940  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml (934 bytes)
	I0314 18:05:50.334959  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.335014  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.335090  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.336550  952029 out.go:177]   - Using image gcr.io/cloud-spanner-emulator/emulator:1.5.14
	I0314 18:05:50.335112  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.335496  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.335523  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.335804  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.337782  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.337974  952029 addons.go:426] installing /etc/kubernetes/addons/deployment.yaml
	I0314 18:05:50.338334  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.339177  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/livenessprobe:v2.8.0
	I0314 18:05:50.339310  952029 out.go:177]   - Using image docker.io/rancher/local-path-provisioner:v0.0.22
	I0314 18:05:50.340539  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-resizer:v1.6.0
	I0314 18:05:50.339364  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.339386  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/deployment.yaml (1004 bytes)
	I0314 18:05:50.339229  952029 out.go:177]   - Using image ghcr.io/inspektor-gadget/inspektor-gadget:v0.26.0
	I0314 18:05:50.339467  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.339469  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.339573  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.339632  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.341785  952029 out.go:177]   - Using image docker.io/busybox:stable
	I0314 18:05:50.341808  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.341813  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.342017  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.342022  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.342035  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.344435  952029 addons.go:426] installing /etc/kubernetes/addons/ig-namespace.yaml
	I0314 18:05:50.345169  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.345627  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-snapshotter:v6.1.0
	I0314 18:05:50.345663  952029 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:05:50.345678  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-namespace.yaml (55 bytes)
	I0314 18:05:50.347093  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.347147  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-provisioner:v3.3.0
	I0314 18:05:50.347266  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner-rancher.yaml (3113 bytes)
	I0314 18:05:50.347301  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.347757  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.348517  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-attacher:v4.0.0
	I0314 18:05:50.349824  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-external-health-monitor-controller:v0.7.0
	I0314 18:05:50.348604  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.348629  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.348682  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.349528  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.349897  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.349921  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.349936  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.350026  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.350449  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.351402  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/csi-node-driver-registrar:v2.6.0
	I0314 18:05:50.351557  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.351616  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.352204  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.353060  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.353081  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.352707  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.353026  952029 out.go:177]   - Using image registry.k8s.io/sig-storage/hostpathplugin:v1.9.0
	I0314 18:05:50.354575  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-external-attacher.yaml
	I0314 18:05:50.354594  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-attacher.yaml (3073 bytes)
	I0314 18:05:50.353288  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.354612  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:50.353299  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.354773  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.354901  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.355082  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.357288  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.357705  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:50.357735  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:50.357858  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:50.357989  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:50.358097  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:50.358258  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:50.686384  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml
	I0314 18:05:50.686829  952029 addons.go:426] installing /etc/kubernetes/addons/registry-svc.yaml
	I0314 18:05:50.686865  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-svc.yaml (398 bytes)
	I0314 18:05:50.746026  952029 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:05:50.746130  952029 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:05:50.764588  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml
	I0314 18:05:50.797313  952029 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 18:05:50.797341  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1907 bytes)
	I0314 18:05:50.828077  952029 addons.go:426] installing /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:05:50.828108  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/registry-proxy.yaml (947 bytes)
	I0314 18:05:50.904186  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml
	I0314 18:05:50.925390  952029 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-rbac.yaml
	I0314 18:05:50.925416  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-rbac.yaml (1188 bytes)
	I0314 18:05:50.945614  952029 addons.go:426] installing /etc/kubernetes/addons/yakd-sa.yaml
	I0314 18:05:50.945641  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-sa.yaml (247 bytes)
	I0314 18:05:50.972355  952029 addons.go:426] installing /etc/kubernetes/addons/ig-serviceaccount.yaml
	I0314 18:05:50.972393  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-serviceaccount.yaml (80 bytes)
	I0314 18:05:51.002019  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:05:51.045914  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-hostpath.yaml
	I0314 18:05:51.045946  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-hostpath.yaml (4266 bytes)
	I0314 18:05:51.051491  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml
	I0314 18:05:51.067149  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml
	I0314 18:05:51.076202  952029 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml
	I0314 18:05:51.076245  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml (6471 bytes)
	I0314 18:05:51.131276  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml
	I0314 18:05:51.153414  952029 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 18:05:51.153436  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 18:05:51.159614  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:05:51.166111  952029 addons.go:426] installing /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 18:05:51.166146  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/helm-tiller-svc.yaml (951 bytes)
	I0314 18:05:51.216294  952029 addons.go:426] installing /etc/kubernetes/addons/yakd-crb.yaml
	I0314 18:05:51.216321  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-crb.yaml (422 bytes)
	I0314 18:05:51.244737  952029 addons.go:426] installing /etc/kubernetes/addons/ig-role.yaml
	I0314 18:05:51.244760  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-role.yaml (210 bytes)
	I0314 18:05:51.294998  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml
	I0314 18:05:51.295030  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml (3038 bytes)
	I0314 18:05:51.325332  952029 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml
	I0314 18:05:51.325361  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml (23126 bytes)
	I0314 18:05:51.432625  952029 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:05:51.432665  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 18:05:51.461002  952029 addons.go:426] installing /etc/kubernetes/addons/yakd-svc.yaml
	I0314 18:05:51.461037  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-svc.yaml (412 bytes)
	I0314 18:05:51.476538  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml
	I0314 18:05:51.573238  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-external-provisioner.yaml
	I0314 18:05:51.573276  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-provisioner.yaml (4442 bytes)
	I0314 18:05:51.586278  952029 addons.go:426] installing /etc/kubernetes/addons/ig-rolebinding.yaml
	I0314 18:05:51.586305  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-rolebinding.yaml (244 bytes)
	I0314 18:05:51.610860  952029 addons.go:426] installing /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml
	I0314 18:05:51.610889  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml (19582 bytes)
	I0314 18:05:51.694261  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 18:05:51.891559  952029 addons.go:426] installing /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:05:51.891583  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/yakd-dp.yaml (2017 bytes)
	I0314 18:05:51.936155  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml
	I0314 18:05:51.936198  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml (3545 bytes)
	I0314 18:05:51.983430  952029 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrole.yaml
	I0314 18:05:51.983455  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrole.yaml (1485 bytes)
	I0314 18:05:51.989068  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-external-resizer.yaml
	I0314 18:05:51.989086  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-resizer.yaml (2943 bytes)
	I0314 18:05:52.122119  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml
	I0314 18:05:52.352349  952029 addons.go:426] installing /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:05:52.352382  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml (1475 bytes)
	I0314 18:05:52.353675  952029 addons.go:426] installing /etc/kubernetes/addons/rbac-external-snapshotter.yaml
	I0314 18:05:52.353698  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/rbac-external-snapshotter.yaml (3149 bytes)
	I0314 18:05:52.452095  952029 addons.go:426] installing /etc/kubernetes/addons/ig-clusterrolebinding.yaml
	I0314 18:05:52.452143  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-clusterrolebinding.yaml (274 bytes)
	I0314 18:05:52.616065  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:05:52.632037  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-attacher.yaml
	I0314 18:05:52.632073  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-attacher.yaml (2143 bytes)
	I0314 18:05:52.753486  952029 addons.go:426] installing /etc/kubernetes/addons/ig-crd.yaml
	I0314 18:05:52.753522  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-crd.yaml (5216 bytes)
	I0314 18:05:52.889839  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml
	I0314 18:05:52.889889  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml (1274 bytes)
	I0314 18:05:53.190697  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-plugin.yaml
	I0314 18:05:53.190731  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-plugin.yaml (8201 bytes)
	I0314 18:05:53.236423  952029 addons.go:426] installing /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:05:53.236448  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/ig-daemonset.yaml (7735 bytes)
	I0314 18:05:53.809862  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml
	I0314 18:05:53.833111  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-resizer.yaml
	I0314 18:05:53.833136  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-resizer.yaml (2191 bytes)
	I0314 18:05:54.236091  952029 addons.go:426] installing /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:05:54.236121  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/csi-hostpath-storageclass.yaml (846 bytes)
	I0314 18:05:54.487920  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml
	I0314 18:05:55.870577  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/deployment.yaml: (5.184154047s)
	I0314 18:05:55.870643  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:55.870662  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:55.870981  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:55.871008  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:55.871013  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:05:55.871025  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:55.871091  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:55.871348  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:55.871393  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:55.961887  952029 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (5.215714163s)
	I0314 18:05:55.961924  952029 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (5.215856695s)
	I0314 18:05:55.961971  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/nvidia-device-plugin.yaml: (5.197344186s)
	I0314 18:05:55.962012  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:55.962025  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:55.961929  952029 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0314 18:05:55.962302  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:55.962348  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:55.962370  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:55.962389  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:55.962672  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:55.962775  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:55.963179  952029 node_ready.go:35] waiting up to 6m0s for node "addons-677681" to be "Ready" ...
	I0314 18:05:55.971636  952029 node_ready.go:49] node "addons-677681" has status "Ready":"True"
	I0314 18:05:55.971661  952029 node_ready.go:38] duration metric: took 8.451915ms for node "addons-677681" to be "Ready" ...
	I0314 18:05:55.971670  952029 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:05:55.985497  952029 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace to be "Ready" ...
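Editorial aside: the pod_ready waits above poll the API server until the pod's Ready condition is True or the timeout expires. A minimal sketch of that polling pattern using client-go, assuming a configured clientset is available (the names are illustrative; this is not minikube's pod_ready.go):

package readiness

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// waitForPodReady polls the named pod until its Ready condition is True,
// or returns an error once the timeout elapses.
func waitForPodReady(ctx context.Context, c kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := c.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil {
			for _, cond := range pod.Status.Conditions {
				if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready after %s", ns, name, timeout)
}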
	I0314 18:05:56.467706  952029 kapi.go:248] "coredns" deployment in "kube-system" namespace and "addons-677681" context rescaled to 1 replicas
	I0314 18:05:56.937981  952029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_application_credentials.json (162 bytes)
	I0314 18:05:56.938027  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:56.941506  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:56.942012  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:56.942043  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:56.942343  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:56.942534  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:56.942741  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:56.942972  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
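Editorial aside: each "sshutil ... new ssh client" line corresponds to opening an SSH session against the node (user docker, port 22, per-machine key). A rough sketch of opening such a connection with golang.org/x/crypto/ssh, purely illustrative and not minikube's sshutil implementation:

package sshclient

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// dial opens an SSH connection authenticated with a private key file,
// e.g. user "docker" against 192.168.39.215:22 as in the log above.
func dial(user, addr, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, fmt.Errorf("read key: %w", err)
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, fmt.Errorf("parse key: %w", err)
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		// Acceptable only for throwaway test VMs like these.
		HostKeyCallback: ssh.InsecureIgnoreHostKey(),
	}
	return ssh.Dial("tcp", addr, cfg)
}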
	I0314 18:05:57.606136  952029 ssh_runner.go:362] scp memory --> /var/lib/minikube/google_cloud_project (12 bytes)
	I0314 18:05:57.742192  952029 addons.go:234] Setting addon gcp-auth=true in "addons-677681"
	I0314 18:05:57.742250  952029 host.go:66] Checking if "addons-677681" exists ...
	I0314 18:05:57.742595  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:57.742628  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:57.780921  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42637
	I0314 18:05:57.781393  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:57.782105  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:57.782131  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:57.782488  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:57.783028  952029 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:05:57.783055  952029 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:05:57.798961  952029 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46307
	I0314 18:05:57.799553  952029 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:05:57.800023  952029 main.go:141] libmachine: Using API Version  1
	I0314 18:05:57.800044  952029 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:05:57.800398  952029 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:05:57.800593  952029 main.go:141] libmachine: (addons-677681) Calling .GetState
	I0314 18:05:57.802084  952029 main.go:141] libmachine: (addons-677681) Calling .DriverName
	I0314 18:05:57.802312  952029 ssh_runner.go:195] Run: cat /var/lib/minikube/google_application_credentials.json
	I0314 18:05:57.802334  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHHostname
	I0314 18:05:57.805597  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:57.806056  952029 main.go:141] libmachine: (addons-677681) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:54:90:2b", ip: ""} in network mk-addons-677681: {Iface:virbr1 ExpiryTime:2024-03-14 19:05:10 +0000 UTC Type:0 Mac:52:54:00:54:90:2b Iaid: IPaddr:192.168.39.215 Prefix:24 Hostname:addons-677681 Clientid:01:52:54:00:54:90:2b}
	I0314 18:05:57.806086  952029 main.go:141] libmachine: (addons-677681) DBG | domain addons-677681 has defined IP address 192.168.39.215 and MAC address 52:54:00:54:90:2b in network mk-addons-677681
	I0314 18:05:57.806266  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHPort
	I0314 18:05:57.806471  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHKeyPath
	I0314 18:05:57.806653  952029 main.go:141] libmachine: (addons-677681) Calling .GetSSHUsername
	I0314 18:05:57.806810  952029 sshutil.go:53] new ssh client: &{IP:192.168.39.215 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/addons-677681/id_rsa Username:docker}
	I0314 18:05:58.062162  952029 pod_ready.go:102] pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace has status "Ready":"False"
	I0314 18:05:58.832136  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner-rancher.yaml: (7.927906316s)
	I0314 18:05:58.832203  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.832232  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.832550  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.832669  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.832689  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.832713  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.832751  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:05:58.832980  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.833002  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.846440  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (7.844373649s)
	I0314 18:05:58.846486  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.846536  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.846533  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-dns-pod.yaml: (7.795001015s)
	I0314 18:05:58.846586  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.846605  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.846942  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.846959  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.846969  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.846977  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.847009  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.847023  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.847034  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:58.847046  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:58.847180  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.847194  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.847312  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:58.847326  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:58.847351  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:05:59.281026  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:05:59.281064  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:05:59.281404  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:05:59.281461  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:05:59.281409  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.545822  952029 pod_ready.go:102] pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:00.912970  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ingress-deploy.yaml: (9.845781375s)
	I0314 18:06:00.913026  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913038  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913058  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/registry-rc.yaml -f /etc/kubernetes/addons/registry-svc.yaml -f /etc/kubernetes/addons/registry-proxy.yaml: (9.781735518s)
	I0314 18:06:00.913112  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913136  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913170  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (9.75352537s)
	I0314 18:06:00.913227  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/helm-tiller-dp.yaml -f /etc/kubernetes/addons/helm-tiller-rbac.yaml -f /etc/kubernetes/addons/helm-tiller-svc.yaml: (9.436644733s)
	I0314 18:06:00.913270  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913302  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913332  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (9.219037214s)
	I0314 18:06:00.913365  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913377  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913414  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/yakd-ns.yaml -f /etc/kubernetes/addons/yakd-sa.yaml -f /etc/kubernetes/addons/yakd-crb.yaml -f /etc/kubernetes/addons/yakd-svc.yaml -f /etc/kubernetes/addons/yakd-dp.yaml: (8.791265992s)
	I0314 18:06:00.913435  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913446  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913508  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.913521  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.913531  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913535  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.913507  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.913560  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.913569  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913575  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913750  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.913796  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.913794  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.913816  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.913827  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913842  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913866  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.913894  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.913904  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.913912  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913538  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.913930  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.913943  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (8.297838927s)
	I0314 18:06:00.913958  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.913966  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.913977  952029 addons.go:470] Verifying addon registry=true in "addons-677681"
	W0314 18:06:00.913977  952029 addons.go:452] apply failed, will retry: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
	I0314 18:06:00.914005  952029 retry.go:31] will retry after 208.712657ms: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: Process exited with status 1
	stdout:
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotclasses.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshotcontents.snapshot.storage.k8s.io created
	customresourcedefinition.apiextensions.k8s.io/volumesnapshots.snapshot.storage.k8s.io created
	serviceaccount/snapshot-controller created
	clusterrole.rbac.authorization.k8s.io/snapshot-controller-runner created
	clusterrolebinding.rbac.authorization.k8s.io/snapshot-controller-role created
	role.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	rolebinding.rbac.authorization.k8s.io/snapshot-controller-leaderelection created
	deployment.apps/snapshot-controller created
	
	stderr:
	error: resource mapping not found for name: "csi-hostpath-snapclass" namespace: "" from "/etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml": no matches for kind "VolumeSnapshotClass" in version "snapshot.storage.k8s.io/v1"
	ensure CRDs are installed first
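Editorial aside: the failure above appears to be the usual ordering race between the VolumeSnapshot CRDs and the VolumeSnapshotClass that depends on them ("ensure CRDs are installed first"), and the harness simply re-runs the apply after the logged backoff; the retried invocation at 18:06:01 below uses `kubectl apply --force`. A minimal sketch of that retry shape (illustrative names, not minikube's retry package):

package retrysketch

import "time"

// retryWithBackoff re-runs fn until it succeeds or attempts are exhausted,
// doubling the wait each time, mirroring the "will retry after ..." behaviour above.
func retryWithBackoff(fn func() error, attempts int, initial time.Duration) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = fn(); err == nil {
			return nil
		}
		time.Sleep(delay)
		delay *= 2
	}
	return err
}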
	I0314 18:06:00.915634  952029 out.go:177] * Verifying registry addon...
	I0314 18:06:00.914045  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/ig-namespace.yaml -f /etc/kubernetes/addons/ig-serviceaccount.yaml -f /etc/kubernetes/addons/ig-role.yaml -f /etc/kubernetes/addons/ig-rolebinding.yaml -f /etc/kubernetes/addons/ig-clusterrole.yaml -f /etc/kubernetes/addons/ig-clusterrolebinding.yaml -f /etc/kubernetes/addons/ig-crd.yaml -f /etc/kubernetes/addons/ig-daemonset.yaml: (7.104147477s)
	I0314 18:06:00.914345  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.914362  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.914388  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.914406  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.914473  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.914579  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.914730  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.916545  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.917355  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.917368  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.917384  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.917432  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.917441  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.917445  952029 addons.go:470] Verifying addon metrics-server=true in "addons-677681"
	I0314 18:06:00.917451  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.917460  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.917494  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.917503  952029 addons.go:470] Verifying addon ingress=true in "addons-677681"
	I0314 18:06:00.918905  952029 out.go:177] * Verifying ingress addon...
	I0314 18:06:00.917625  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.917662  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.917699  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.917797  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.917822  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.918228  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.918253  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.918347  952029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=registry" in ns "kube-system" ...
	I0314 18:06:00.920494  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.920502  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.920513  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.920516  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.920523  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.920525  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.920500  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.922192  952029 out.go:177] * To access YAKD - Kubernetes Dashboard, wait for Pod to be ready and run the following command:
	
		minikube -p addons-677681 service yakd-dashboard -n yakd-dashboard
	
	I0314 18:06:00.920769  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.920770  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.920918  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.920947  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:00.921751  952029 kapi.go:75] Waiting for pod with label "app.kubernetes.io/name=ingress-nginx" in ns "ingress-nginx" ...
	I0314 18:06:00.923618  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.923673  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.938238  952029 kapi.go:86] Found 2 Pods for label selector kubernetes.io/minikube-addons=registry
	I0314 18:06:00.938262  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:00.945813  952029 kapi.go:86] Found 3 Pods for label selector app.kubernetes.io/name=ingress-nginx
	I0314 18:06:00.945840  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:00.979423  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:00.979449  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:00.979835  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:00.979862  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:00.979871  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:01.123711  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml
	I0314 18:06:01.432370  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:01.436836  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:01.955510  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:01.961888  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:02.435369  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:02.445363  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:02.942608  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:02.942618  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:03.002133  952029 pod_ready.go:102] pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:03.467455  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:03.468008  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:03.949337  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:03.957664  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/rbac-external-attacher.yaml -f /etc/kubernetes/addons/rbac-hostpath.yaml -f /etc/kubernetes/addons/rbac-external-health-monitor-controller.yaml -f /etc/kubernetes/addons/rbac-external-provisioner.yaml -f /etc/kubernetes/addons/rbac-external-resizer.yaml -f /etc/kubernetes/addons/rbac-external-snapshotter.yaml -f /etc/kubernetes/addons/csi-hostpath-attacher.yaml -f /etc/kubernetes/addons/csi-hostpath-driverinfo.yaml -f /etc/kubernetes/addons/csi-hostpath-plugin.yaml -f /etc/kubernetes/addons/csi-hostpath-resizer.yaml -f /etc/kubernetes/addons/csi-hostpath-storageclass.yaml: (9.469692664s)
	I0314 18:06:03.957726  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:03.957740  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:03.957738  952029 ssh_runner.go:235] Completed: cat /var/lib/minikube/google_application_credentials.json: (6.155398901s)
	I0314 18:06:03.960074  952029 out.go:177]   - Using image registry.k8s.io/ingress-nginx/kube-webhook-certgen:v1.4.0
	I0314 18:06:03.958129  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:03.958167  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:03.961956  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:03.961969  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:03.961987  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:03.963929  952029 out.go:177]   - Using image gcr.io/k8s-minikube/gcp-auth-webhook:v0.1.2
	I0314 18:06:03.962327  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:03.962366  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:03.965185  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:03.965216  952029 addons.go:470] Verifying addon csi-hostpath-driver=true in "addons-677681"
	I0314 18:06:03.966647  952029 out.go:177] * Verifying csi-hostpath-driver addon...
	I0314 18:06:03.965216  952029 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-ns.yaml
	I0314 18:06:03.968342  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-ns.yaml (700 bytes)
	I0314 18:06:03.969098  952029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=csi-hostpath-driver" in ns "kube-system" ...
	I0314 18:06:04.037055  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:04.062756  952029 kapi.go:86] Found 3 Pods for label selector kubernetes.io/minikube-addons=csi-hostpath-driver
	I0314 18:06:04.062792  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:04.087624  952029 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-service.yaml
	I0314 18:06:04.087657  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-service.yaml (788 bytes)
	I0314 18:06:04.229156  952029 addons.go:426] installing /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:06:04.229187  952029 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/gcp-auth-webhook.yaml (5421 bytes)
	I0314 18:06:04.259030  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply --force -f /etc/kubernetes/addons/csi-hostpath-snapshotclass.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotclasses.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshotcontents.yaml -f /etc/kubernetes/addons/snapshot.storage.k8s.io_volumesnapshots.yaml -f /etc/kubernetes/addons/rbac-volume-snapshot-controller.yaml -f /etc/kubernetes/addons/volume-snapshot-controller-deployment.yaml: (3.135257957s)
	I0314 18:06:04.259098  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:04.259115  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:04.259425  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:04.259451  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:04.259462  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:04.259465  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:04.259471  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:04.259844  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:04.259856  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:04.259865  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:04.333002  952029 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml
	I0314 18:06:04.425188  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:04.428955  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:04.474813  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:04.926072  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:04.931629  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:04.976758  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:04.997443  952029 pod_ready.go:97] pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-14 18:05:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-14 18:05:53 +0000 UTC,FinishedAt:2024-03-14 18:06:03 +0000 UTC,ContainerID:cri-o://696da1fb0cb569166d3347afc4ea98bb09cde934eb8ceae4b1f1bc838b18bd5e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://696da1fb0cb569166d3347afc4ea98bb09cde934eb8ceae4b1f1bc838b18bd5e Started:0xc002dabab0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0314 18:06:04.997482  952029 pod_ready.go:81] duration metric: took 9.011955119s for pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace to be "Ready" ...
	E0314 18:06:04.997497  952029 pod_ready.go:66] WaitExtra: waitPodCondition: pod "coredns-5dd5756b68-2jqbd" in "kube-system" namespace has status phase "Succeeded" (skipping!): {Phase:Succeeded Conditions:[{Type:Initialized Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:Ready Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:ContainersReady Status:False LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason:PodCompleted Message:} {Type:PodScheduled Status:True LastProbeTime:0001-01-01 00:00:00 +0000 UTC LastTransitionTime:2024-03-14 18:05:50 +0000 UTC Reason: Message:}] Message: Reason: NominatedNodeName: HostIP:192.168.39.215 HostIPs:[] PodIP: PodIPs:[] StartTime:2024-03-14 18:05:50 +0000 UTC InitContainerStatuses:[] ContainerStatuses:[{Name:coredns State:{Waiting:nil Running:nil Terminated:&ContainerStateTerminated{ExitCode:0,Signal:0,Reason:Completed,Message:,StartedAt:2024-03-14 18:05:53 +0000 UTC,FinishedAt:2024-03-14 18:06:03 +0000 UTC,ContainerID:cri-o://696da1fb0cb569166d3347afc4ea98bb09cde934eb8ceae4b1f1bc838b18bd5e,}} LastTerminationState:{Waiting:nil Running:nil Terminated:nil} Ready:false RestartCount:0 Image:registry.k8s.io/coredns/coredns:v1.10.1 ImageID:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc ContainerID:cri-o://696da1fb0cb569166d3347afc4ea98bb09cde934eb8ceae4b1f1bc838b18bd5e Started:0xc002dabab0 AllocatedResources:map[] Resources:nil}] QOSClass:Burstable EphemeralContainerStatuses:[] Resize: ResourceClaimStatuses:[]}
	I0314 18:06:04.997508  952029 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:05.428734  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:05.435064  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:05.474740  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:05.764703  952029 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/gcp-auth-ns.yaml -f /etc/kubernetes/addons/gcp-auth-service.yaml -f /etc/kubernetes/addons/gcp-auth-webhook.yaml: (1.431647253s)
	I0314 18:06:05.764787  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:05.764810  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:05.765146  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:05.765167  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:05.765183  952029 main.go:141] libmachine: Making call to close driver server
	I0314 18:06:05.765192  952029 main.go:141] libmachine: (addons-677681) Calling .Close
	I0314 18:06:05.765606  952029 main.go:141] libmachine: (addons-677681) DBG | Closing plugin on server side
	I0314 18:06:05.765619  952029 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:06:05.765635  952029 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:06:05.766771  952029 addons.go:470] Verifying addon gcp-auth=true in "addons-677681"
	I0314 18:06:05.768465  952029 out.go:177] * Verifying gcp-auth addon...
	I0314 18:06:05.771160  952029 kapi.go:75] Waiting for pod with label "kubernetes.io/minikube-addons=gcp-auth" in ns "gcp-auth" ...
	I0314 18:06:05.789507  952029 kapi.go:86] Found 1 Pods for label selector kubernetes.io/minikube-addons=gcp-auth
	I0314 18:06:05.789526  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:05.927110  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:05.935680  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:05.975343  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:06.276078  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:06.425414  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:06.434286  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:06.475044  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:06.776469  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:06.926333  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:06.931182  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:06.974994  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:07.004948  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:07.275972  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:07.427446  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:07.429096  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:07.655398  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:07.775380  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:07.925568  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:07.928898  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:07.979767  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:08.275675  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:08.425267  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:08.427911  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:08.480428  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:08.775849  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:08.925139  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:08.927440  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:08.974803  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:09.278681  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:09.428084  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:09.430367  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:09.477310  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:09.504470  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:09.775143  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:09.928981  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:09.933196  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:09.976626  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:10.393156  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:10.426068  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:10.433538  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:10.474994  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:10.775413  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:10.926735  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:10.933724  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:10.975169  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:11.276753  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:11.425207  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:11.429482  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:11.477922  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:11.504829  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:11.776907  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:11.926088  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:11.928759  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:11.974973  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:12.275477  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:12.426033  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:12.429887  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:12.475276  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:12.777731  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:12.928733  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:12.931565  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:12.976593  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:13.279240  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:13.428090  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:13.430145  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:13.474403  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:13.506603  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:13.780414  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:13.932885  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:13.937727  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:13.978017  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:14.280668  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:14.428964  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:14.432167  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:14.479221  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:14.775882  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:14.926010  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:14.928356  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:14.975906  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:15.275668  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:15.425944  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:15.428574  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:15.479656  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:15.776128  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:15.926047  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:15.928135  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:15.975978  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:16.006916  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:16.275697  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:16.426202  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:16.428320  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:16.475008  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:17.026003  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:17.026127  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:17.026639  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:17.028569  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:17.280919  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:17.427833  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:17.429442  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:17.475486  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:17.775017  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:17.925933  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:17.929324  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:17.976767  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:18.277419  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:18.426261  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:18.428412  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:18.474685  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:18.503039  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:18.777195  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:18.925626  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:18.928832  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:18.975276  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:19.277375  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:19.427322  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:19.429580  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:19.474589  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:19.781013  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:19.927102  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:19.928608  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:19.975815  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:20.276586  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:20.428033  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:20.429162  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:20.476356  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:20.505162  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:20.774898  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:20.930511  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:20.931156  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:20.978992  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:21.275697  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:21.428575  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:21.429722  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:21.474837  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:21.775815  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:21.925814  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:21.929270  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:21.975465  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:22.275805  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:22.426386  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:22.429278  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:22.475421  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:22.505819  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:22.778396  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:22.926608  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:22.931320  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:22.974504  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:23.276869  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:23.425624  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:23.429327  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:23.475762  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:23.775542  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:23.926570  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:23.928978  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:23.976165  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:24.274929  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:24.426773  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:24.430224  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:24.477914  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:25.016025  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:25.016145  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:25.018550  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:25.019122  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:25.019440  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:25.275707  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:25.425684  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:25.429549  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:25.474631  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:25.776567  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:25.926553  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:25.928901  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:25.975281  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:26.277111  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:26.426110  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:26.429006  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:26.476359  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:26.775737  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:26.926293  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:26.929270  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:26.975060  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:27.276138  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:27.426435  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:27.429723  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:27.483715  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:27.503581  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:27.781421  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:27.928344  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:27.929997  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:27.974838  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:28.278844  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:28.437077  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:28.437137  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:28.477784  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:28.776854  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:28.926479  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:28.929654  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:28.975077  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:29.275370  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:29.426665  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:29.428912  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:29.476302  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:29.504732  952029 pod_ready.go:102] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"False"
	I0314 18:06:29.775764  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:29.925052  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:29.928134  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:29.975057  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:30.275327  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:30.426136  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:30.427990  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:30.475580  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:30.775671  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:30.925601  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:30.928843  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:30.975053  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:31.275411  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:31.427969  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:31.433438  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:31.476636  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:31.505819  952029 pod_ready.go:92] pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.505841  952029 pod_ready.go:81] duration metric: took 26.508319122s for pod "coredns-5dd5756b68-lh8vw" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.505851  952029 pod_ready.go:78] waiting up to 6m0s for pod "etcd-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.510985  952029 pod_ready.go:92] pod "etcd-addons-677681" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.511005  952029 pod_ready.go:81] duration metric: took 5.147143ms for pod "etcd-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.511016  952029 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.515672  952029 pod_ready.go:92] pod "kube-apiserver-addons-677681" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.515687  952029 pod_ready.go:81] duration metric: took 4.66424ms for pod "kube-apiserver-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.515695  952029 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.523002  952029 pod_ready.go:92] pod "kube-controller-manager-addons-677681" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.523021  952029 pod_ready.go:81] duration metric: took 7.319534ms for pod "kube-controller-manager-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.523047  952029 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-xgj2v" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.535138  952029 pod_ready.go:92] pod "kube-proxy-xgj2v" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.535157  952029 pod_ready.go:81] duration metric: took 12.104527ms for pod "kube-proxy-xgj2v" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.535167  952029 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.775703  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:31.902770  952029 pod_ready.go:92] pod "kube-scheduler-addons-677681" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:31.902798  952029 pod_ready.go:81] duration metric: took 367.624945ms for pod "kube-scheduler-addons-677681" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.902810  952029 pod_ready.go:78] waiting up to 6m0s for pod "nvidia-device-plugin-daemonset-22t8w" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:31.926518  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:31.928384  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:31.976147  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:32.275896  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:32.302859  952029 pod_ready.go:92] pod "nvidia-device-plugin-daemonset-22t8w" in "kube-system" namespace has status "Ready":"True"
	I0314 18:06:32.302881  952029 pod_ready.go:81] duration metric: took 400.063716ms for pod "nvidia-device-plugin-daemonset-22t8w" in "kube-system" namespace to be "Ready" ...
	I0314 18:06:32.302891  952029 pod_ready.go:38] duration metric: took 36.331203371s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:06:32.302911  952029 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:06:32.302974  952029 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:06:32.405571  952029 api_server.go:72] duration metric: took 42.231948123s to wait for apiserver process to appear ...
	I0314 18:06:32.405610  952029 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:06:32.405638  952029 api_server.go:253] Checking apiserver healthz at https://192.168.39.215:8443/healthz ...
	I0314 18:06:32.410919  952029 api_server.go:279] https://192.168.39.215:8443/healthz returned 200:
	ok
	I0314 18:06:32.412432  952029 api_server.go:141] control plane version: v1.28.4
	I0314 18:06:32.412457  952029 api_server.go:131] duration metric: took 6.840388ms to wait for apiserver health ...
	I0314 18:06:32.412466  952029 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:06:32.427384  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:32.429148  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:32.475695  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:32.509308  952029 system_pods.go:59] 18 kube-system pods found
	I0314 18:06:32.509363  952029 system_pods.go:61] "coredns-5dd5756b68-lh8vw" [51d39c48-8dda-4927-a694-630e52935d70] Running
	I0314 18:06:32.509372  952029 system_pods.go:61] "csi-hostpath-attacher-0" [269a6c32-fbb2-4874-8cc9-31fca4fb4541] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:06:32.509381  952029 system_pods.go:61] "csi-hostpath-resizer-0" [296b0d90-3221-46f2-bf9b-dc6301ef7ea5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:06:32.509393  952029 system_pods.go:61] "csi-hostpathplugin-drc84" [8eaca7a1-3978-4b9c-bd10-3238f6235de7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:06:32.509426  952029 system_pods.go:61] "etcd-addons-677681" [046a8136-2df4-4e28-a402-21246f8f3093] Running
	I0314 18:06:32.509435  952029 system_pods.go:61] "kube-apiserver-addons-677681" [7d15caae-cf2d-42bd-aa54-528e02673a7c] Running
	I0314 18:06:32.509439  952029 system_pods.go:61] "kube-controller-manager-addons-677681" [a9cd42d6-723b-4c5a-a74f-7e38ff8cc685] Running
	I0314 18:06:32.509444  952029 system_pods.go:61] "kube-ingress-dns-minikube" [afa358a2-8cac-48fc-a475-8fd228291c6f] Running
	I0314 18:06:32.509447  952029 system_pods.go:61] "kube-proxy-xgj2v" [9a57f545-88bc-4110-8da7-6028de02678b] Running
	I0314 18:06:32.509453  952029 system_pods.go:61] "kube-scheduler-addons-677681" [4a265aef-2aca-4132-85e6-2478cdcdaf3e] Running
	I0314 18:06:32.509456  952029 system_pods.go:61] "metrics-server-69cf46c98-7jkq5" [27d0af32-b416-4e1a-bb9c-df89c071e23b] Running
	I0314 18:06:32.509462  952029 system_pods.go:61] "nvidia-device-plugin-daemonset-22t8w" [c334025a-fbc2-4b2c-b4f0-421a2b1481ac] Running
	I0314 18:06:32.509470  952029 system_pods.go:61] "registry-proxy-l87zl" [5e0174a6-c5fb-448a-9a4a-caf4fa57b737] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 18:06:32.509477  952029 system_pods.go:61] "registry-pwdvn" [492bfa41-6a10-4828-9e01-3744a4cb381a] Running
	I0314 18:06:32.509491  952029 system_pods.go:61] "snapshot-controller-58dbcc7b99-ms5hb" [b5ec4128-57d5-4664-8dec-3b98abab0311] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:06:32.509503  952029 system_pods.go:61] "snapshot-controller-58dbcc7b99-njvrd" [a018922c-4b39-4bf8-9fea-f3198c1e65ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:06:32.509512  952029 system_pods.go:61] "storage-provisioner" [7cc11c88-3302-458c-82aa-7669097aeff0] Running
	I0314 18:06:32.509516  952029 system_pods.go:61] "tiller-deploy-7b677967b9-pghvn" [3d96f26e-707b-4ed4-8262-d94ea4378716] Running
	I0314 18:06:32.509523  952029 system_pods.go:74] duration metric: took 97.05346ms to wait for pod list to return data ...
	I0314 18:06:32.509534  952029 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:06:32.703236  952029 default_sa.go:45] found service account: "default"
	I0314 18:06:32.703264  952029 default_sa.go:55] duration metric: took 193.724021ms for default service account to be created ...
	I0314 18:06:32.703284  952029 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:06:32.777123  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:32.909720  952029 system_pods.go:86] 18 kube-system pods found
	I0314 18:06:32.909750  952029 system_pods.go:89] "coredns-5dd5756b68-lh8vw" [51d39c48-8dda-4927-a694-630e52935d70] Running
	I0314 18:06:32.909760  952029 system_pods.go:89] "csi-hostpath-attacher-0" [269a6c32-fbb2-4874-8cc9-31fca4fb4541] Pending / Ready:ContainersNotReady (containers with unready status: [csi-attacher]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-attacher])
	I0314 18:06:32.909770  952029 system_pods.go:89] "csi-hostpath-resizer-0" [296b0d90-3221-46f2-bf9b-dc6301ef7ea5] Pending / Ready:ContainersNotReady (containers with unready status: [csi-resizer]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-resizer])
	I0314 18:06:32.909781  952029 system_pods.go:89] "csi-hostpathplugin-drc84" [8eaca7a1-3978-4b9c-bd10-3238f6235de7] Pending / Ready:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter]) / ContainersReady:ContainersNotReady (containers with unready status: [csi-external-health-monitor-controller node-driver-registrar hostpath liveness-probe csi-provisioner csi-snapshotter])
	I0314 18:06:32.909789  952029 system_pods.go:89] "etcd-addons-677681" [046a8136-2df4-4e28-a402-21246f8f3093] Running
	I0314 18:06:32.909795  952029 system_pods.go:89] "kube-apiserver-addons-677681" [7d15caae-cf2d-42bd-aa54-528e02673a7c] Running
	I0314 18:06:32.909801  952029 system_pods.go:89] "kube-controller-manager-addons-677681" [a9cd42d6-723b-4c5a-a74f-7e38ff8cc685] Running
	I0314 18:06:32.909807  952029 system_pods.go:89] "kube-ingress-dns-minikube" [afa358a2-8cac-48fc-a475-8fd228291c6f] Running
	I0314 18:06:32.909812  952029 system_pods.go:89] "kube-proxy-xgj2v" [9a57f545-88bc-4110-8da7-6028de02678b] Running
	I0314 18:06:32.909819  952029 system_pods.go:89] "kube-scheduler-addons-677681" [4a265aef-2aca-4132-85e6-2478cdcdaf3e] Running
	I0314 18:06:32.909825  952029 system_pods.go:89] "metrics-server-69cf46c98-7jkq5" [27d0af32-b416-4e1a-bb9c-df89c071e23b] Running
	I0314 18:06:32.909831  952029 system_pods.go:89] "nvidia-device-plugin-daemonset-22t8w" [c334025a-fbc2-4b2c-b4f0-421a2b1481ac] Running
	I0314 18:06:32.909846  952029 system_pods.go:89] "registry-proxy-l87zl" [5e0174a6-c5fb-448a-9a4a-caf4fa57b737] Pending / Ready:ContainersNotReady (containers with unready status: [registry-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [registry-proxy])
	I0314 18:06:32.909850  952029 system_pods.go:89] "registry-pwdvn" [492bfa41-6a10-4828-9e01-3744a4cb381a] Running
	I0314 18:06:32.909857  952029 system_pods.go:89] "snapshot-controller-58dbcc7b99-ms5hb" [b5ec4128-57d5-4664-8dec-3b98abab0311] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:06:32.909871  952029 system_pods.go:89] "snapshot-controller-58dbcc7b99-njvrd" [a018922c-4b39-4bf8-9fea-f3198c1e65ab] Pending / Ready:ContainersNotReady (containers with unready status: [volume-snapshot-controller]) / ContainersReady:ContainersNotReady (containers with unready status: [volume-snapshot-controller])
	I0314 18:06:32.909875  952029 system_pods.go:89] "storage-provisioner" [7cc11c88-3302-458c-82aa-7669097aeff0] Running
	I0314 18:06:32.909879  952029 system_pods.go:89] "tiller-deploy-7b677967b9-pghvn" [3d96f26e-707b-4ed4-8262-d94ea4378716] Running
	I0314 18:06:32.909886  952029 system_pods.go:126] duration metric: took 206.596799ms to wait for k8s-apps to be running ...
	I0314 18:06:32.909899  952029 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:06:32.909961  952029 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:06:32.925664  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:32.928581  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:32.938500  952029 system_svc.go:56] duration metric: took 28.593031ms WaitForService to wait for kubelet
	I0314 18:06:32.938525  952029 kubeadm.go:576] duration metric: took 42.764912931s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:06:32.938549  952029 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:06:32.975341  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:33.104814  952029 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:06:33.104863  952029 node_conditions.go:123] node cpu capacity is 2
	I0314 18:06:33.104894  952029 node_conditions.go:105] duration metric: took 166.340207ms to run NodePressure ...
	I0314 18:06:33.104922  952029 start.go:240] waiting for startup goroutines ...
	I0314 18:06:33.275549  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:33.426193  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:33.428941  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:33.475067  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:33.776164  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:33.933163  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=registry", current state: Pending: [<nil>]
	I0314 18:06:33.934495  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:33.976160  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:34.275682  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:34.427554  952029 kapi.go:107] duration metric: took 33.509201635s to wait for kubernetes.io/minikube-addons=registry ...
	I0314 18:06:34.429671  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:34.475435  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:34.778366  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:34.929172  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:34.975265  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:35.384534  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:35.432163  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:35.476982  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:35.775695  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:35.929654  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:35.975105  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:36.275170  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:36.428484  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:36.475051  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:36.776074  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:36.928637  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:36.975778  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:37.278038  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:37.428573  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:37.476446  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:37.775059  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:37.930854  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:37.974548  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:38.276702  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:38.428435  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:38.475287  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:38.775375  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:38.933576  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:38.980141  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:39.274732  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:39.432680  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:39.475710  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:39.775858  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:39.929697  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:39.977970  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:40.275275  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:40.430468  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:40.480967  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:40.775054  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:40.928515  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:41.000545  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:41.276708  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:41.434299  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:41.476443  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:41.775752  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:41.929181  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:41.975529  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:42.277529  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:42.429558  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:42.475887  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:42.774877  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:42.928960  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:42.976564  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:43.283185  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:43.430120  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:43.475011  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:43.775523  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:43.929682  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:43.976608  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:44.275820  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:44.429580  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:44.475216  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:44.776528  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:44.928858  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:44.975543  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:45.284138  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:45.428825  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:45.474903  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:46.067822  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:46.068680  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:46.072619  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:46.284735  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:46.431514  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:46.476297  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:46.774989  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:46.929759  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:46.978319  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:47.275786  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:47.428564  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:47.477379  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:47.776230  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:47.928704  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:47.978054  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:48.275059  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:48.428550  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:48.475845  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:48.775928  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:48.928484  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:48.975358  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:49.277303  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:49.428865  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:49.475902  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:49.775184  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:49.929436  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:49.979225  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:50.278264  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:50.430514  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:50.518723  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:50.776663  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:50.929284  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:50.974536  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:51.278682  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:51.429975  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:51.475329  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:51.783021  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:51.928654  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:51.980882  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:52.274916  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:52.428373  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:52.477781  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:52.779785  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:52.928182  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:52.975351  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:53.281220  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:53.429138  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:53.475055  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:53.775707  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:53.929532  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:53.979926  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:54.275734  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:54.430210  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:54.475601  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:54.775763  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:54.929546  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:54.975272  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:55.274972  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:55.429090  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:55.480472  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:55.774896  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:55.930518  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:55.982027  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:56.278351  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:56.428921  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:56.475784  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:56.775183  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:56.928623  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:56.976299  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:57.275350  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:57.432482  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:57.475529  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:57.786659  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:57.928865  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:57.975319  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:58.278475  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:58.430042  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:58.475115  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:58.775059  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:58.928401  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:58.977635  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:59.275107  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:59.428833  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:59.474355  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:06:59.782517  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:06:59.929860  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:06:59.976757  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:00.275903  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:00.429281  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:00.474959  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:00.774697  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:00.930233  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:00.975024  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:01.274935  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:01.427843  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:01.482741  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:01.775223  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:02.270360  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:02.273483  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:02.275793  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:02.428699  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:02.480301  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:02.775283  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:02.929504  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:02.980159  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:03.276820  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:03.429828  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:03.475349  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:03.775706  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:03.930676  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:03.976415  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:04.275645  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:04.428453  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:04.475564  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=csi-hostpath-driver", current state: Pending: [<nil>]
	I0314 18:07:04.776546  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:04.929073  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:04.977675  952029 kapi.go:107] duration metric: took 1m1.008572875s to wait for kubernetes.io/minikube-addons=csi-hostpath-driver ...
	I0314 18:07:05.275448  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:05.429269  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:05.776255  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:05.929160  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:06.275528  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:06.429085  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:06.774807  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:06.928868  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:07.275440  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:07.428700  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:07.775372  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:07.928806  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:08.276300  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:08.740028  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:08.776635  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:08.928684  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:09.275265  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:09.429102  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:09.775401  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:09.930569  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:10.275678  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:10.428996  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:10.775748  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:10.930044  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:11.631091  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:11.634396  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:11.776071  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:11.930617  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:12.278198  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:12.428860  952029 kapi.go:96] waiting for pod "app.kubernetes.io/name=ingress-nginx", current state: Pending: [<nil>]
	I0314 18:07:12.776159  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:12.930373  952029 kapi.go:107] duration metric: took 1m12.008616522s to wait for app.kubernetes.io/name=ingress-nginx ...
	I0314 18:07:13.275522  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:13.776277  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:14.275043  952029 kapi.go:96] waiting for pod "kubernetes.io/minikube-addons=gcp-auth", current state: Pending: [<nil>]
	I0314 18:07:14.775780  952029 kapi.go:107] duration metric: took 1m9.004614098s to wait for kubernetes.io/minikube-addons=gcp-auth ...
	I0314 18:07:14.777854  952029 out.go:177] * Your GCP credentials will now be mounted into every pod created in the addons-677681 cluster.
	I0314 18:07:14.779549  952029 out.go:177] * If you don't want your credentials mounted into a specific pod, add a label with the `gcp-auth-skip-secret` key to your pod configuration.
	I0314 18:07:14.781239  952029 out.go:177] * If you want existing pods to be mounted with credentials, either recreate them or rerun addons enable with --refresh.
	I0314 18:07:14.782687  952029 out.go:177] * Enabled addons: cloud-spanner, nvidia-device-plugin, storage-provisioner, ingress-dns, storage-provisioner-rancher, metrics-server, helm-tiller, yakd, inspektor-gadget, default-storageclass, volumesnapshots, registry, csi-hostpath-driver, ingress, gcp-auth
	I0314 18:07:14.783892  952029 addons.go:505] duration metric: took 1m24.610243684s for enable addons: enabled=[cloud-spanner nvidia-device-plugin storage-provisioner ingress-dns storage-provisioner-rancher metrics-server helm-tiller yakd inspektor-gadget default-storageclass volumesnapshots registry csi-hostpath-driver ingress gcp-auth]
	I0314 18:07:14.783928  952029 start.go:245] waiting for cluster config update ...
	I0314 18:07:14.783944  952029 start.go:254] writing updated cluster config ...
	I0314 18:07:14.784289  952029 ssh_runner.go:195] Run: rm -f paused
	I0314 18:07:14.838467  952029 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:07:14.840955  952029 out.go:177] * Done! kubectl is now configured to use "addons-677681" cluster and "default" namespace by default
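[Editor's note] The gcp-auth messages above describe how to keep credentials out of a specific pod: add a label with the `gcp-auth-skip-secret` key to the pod configuration. A minimal sketch of such a pod manifest is below; the label key comes from the log output above, while the pod name, image, and the label value "true" are illustrative assumptions, not taken from this test run.

    apiVersion: v1
    kind: Pod
    metadata:
      name: no-gcp-creds                  # hypothetical name, for illustration only
      labels:
        gcp-auth-skip-secret: "true"      # key from the addon message above; value assumed
    spec:
      containers:
      - name: app
        image: busybox
        command: ["sleep", "3600"]

As the log also notes, pods that already exist must be recreated (or the addon re-enabled with --refresh) for a change like this to take effect, since credentials are injected when the pod is created.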
	
	
	==> CRI-O <==
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.799054773Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cd397caf-97cc-4bb2-99ab-dcced88df6ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.799390917Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36e08d1196793f926feea5bc5dc2052a60184c5435606c435bc2f68a54702567,PodSandboxId:77656f285938d434d917bbc47f2878c29a317e519748c82051fcdfc287cbb9ce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710439809543332757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-c9vzt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be7db71b-156d-4594-a59d-837db6684f84,},Annotations:map[string]string{io.kubernetes.container.hash: 31da147b,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8975d75c1e61a1b2be5a5d102497ecf5599a7377cdcc9ecdedbbe73ecd5f89cb,PodSandboxId:de28b5daac70e06237729faf53894daeea45abe36b986b060663a53003ebd76c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710439673726378794,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-lnjcc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 88a0febe,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d4e39ad5438932559c28efe0363f5b5c9d1493f0766f464cc874b78fe7c08e,PodSandboxId:49f8c864d0084e7aad4a7bf036ca1e2fd24e187889115f0e0014483457f70551,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710439668205624802,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 643944af-9748-4c53-a7ef-8d5ac13c429c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d42b57b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00106371aee10523481f63f0ae9c4b7053e01868d286f26c7bd2ef50aadc0b78,PodSandboxId:bafdbfe51571f76e0957dd3223a51f6cfc78251112b923464c2bb76a2ab55239,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710439634056139983,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-56zbn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 695d7638-01fd-4b91-8f96-dc32a597877d,},Annotations:map[string]string{io.kubernetes.container.hash: fe79a624,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23aff411daf46f990a9b6ace7fc2120de0525838c1f8361ff773b00db280428,PodSandboxId:e212cc7a9482e674538c3aac5489b8ac052c85fe3f6f31e955b68bdddafcd369,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710439609920606025,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wmndh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b329d666-1dc2-4cdd-8739-e5edd448c7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 8d0ee242,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e639a2ce809bbb5aea4c3f6929aaf841fd308b646a5482145b423a1df94a9a,PodSandboxId:fa647de68607a6ee0c26274561f0ea2c5a01d48f87603d765c485701726e0e26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710439603159330646,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dcpl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 06bd68c5-846d-49d7-a756-b59926e729d9,},Annotations:map[string]string{io.kubernetes.container.hash: 8186e77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a2b79332d3cc8a0bb330fa15e2fd1c854e203091acdc8260753931382ca64a,PodSandboxId:acb0b4f28932e694b901c56149f04bb7ce11a7a6334db58db47a0353a084ef40,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710439597530396316,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-xg4nt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0f6ed210-68d9-4437-b638-6c36d1c56f27,},Annotations:map[string]string{io.kubernetes.container.hash: 3e90f918,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c15ea11c647af8023beb566338ff60ab8fb476f2d1cd0e951741c244c4b0a76,PodSandboxId:785bf57227aeb2c8eb172269779b8b88999ed3db5d30e4e7e50fd8c9a00558d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710439562703546702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc11c88-3302-458c-82aa-7669097aeff0,},Annotations:map[string]string{io.kubernetes.container.hash: 705dc359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73d2173ca92c652c54af4baa3b4ca8c40ed6f1a2e2141b80e55c71e74116095,PodSandboxId:b3a3b8d59370c9b60691c23bc4fbef095e304e6bbe011d15f653bbf9109996ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710439552975297608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lh8vw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d39c48-8dda-4927-a694-630e52935d70,},Annotations:map[string]string{io.kubernetes.container.hash: 3f24180a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4d2e14daee94ecc03ba7b53b3fd41e7c3cbdb1f517d95bab65d60ebcc46942,PodSandboxId:54905f40e891cd63e893720ff03d9cffef020a3e3d060d503f7e6d159d58e3
69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710439552023730770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgj2v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a57f545-88bc-4110-8da7-6028de02678b,},Annotations:map[string]string{io.kubernetes.container.hash: 54af0b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993d781fd1255b5ffa919691d2bdb5293f87f6082aba5ab9670c8664ec5541f9,PodSandboxId:ea3a566870f71ea6e06c1187c629943a055d93988d9f43f1be3a40c629205ef6,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710439531261766307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16752ae5f803f36cc48ce3618d6b6279,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42145039a1927b2cc13a463a77e6364a1129ea2ab53423ec92d321fc3890a6b4,PodSandboxId:f1e5b7d021edfde5e52348157808b92b223955b640ca8a1a3204c35bf969fc88,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710439531288261798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a2ea85b2f936d4608e3d7888efd8da,},Annotations:map[string]string{io.kubernetes.container.hash: 3f3ec927,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cbf041a827d5e777fa2bb34308264337086746a0d0826f0c7a3486ba6568d3a,PodSandboxId:f6cb6b1eef02fc3ed9dd2a2c711b425ec558e842d0a7c5e9344c245af5cc8969,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710439531242612762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd5f72dfd6f80611c1ad71ac2fee9d7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085f15909479b24319d72c5103b206e0341aa88aab190e6ec8c1911c08dd1d37,PodSandboxId:00f3ff97e2bc2353011b90d3bd506fc1f182f4fa4c16feaa0a4d9e33974d9a2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710439531239916058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e002bd628e1ba5a39cd67fdddb0688,},Annotations:map[string]string{io.kubernetes.container.hash: 24bbd196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=cd397caf-97cc-4bb2-99ab-dcced88df6ce name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.840809127Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4eddfb07-49b9-4278-96a5-da3e7e08c273 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.840878239Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4eddfb07-49b9-4278-96a5-da3e7e08c273 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.842734806Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d62a2089-1600-4aaa-9856-559a2062beaa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.843977241Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710439817843950355,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d62a2089-1600-4aaa-9856-559a2062beaa name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.844847552Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ade9eb1-de65-4020-affd-0c27570e203a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.844900254Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ade9eb1-de65-4020-affd-0c27570e203a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.845248725Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36e08d1196793f926feea5bc5dc2052a60184c5435606c435bc2f68a54702567,PodSandboxId:77656f285938d434d917bbc47f2878c29a317e519748c82051fcdfc287cbb9ce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710439809543332757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-c9vzt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be7db71b-156d-4594-a59d-837db6684f84,},Annotations:map[string]string{io.kubernetes.container.hash: 31da147b,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8975d75c1e61a1b2be5a5d102497ecf5599a7377cdcc9ecdedbbe73ecd5f89cb,PodSandboxId:de28b5daac70e06237729faf53894daeea45abe36b986b060663a53003ebd76c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710439673726378794,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-lnjcc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 88a0febe,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d4e39ad5438932559c28efe0363f5b5c9d1493f0766f464cc874b78fe7c08e,PodSandboxId:49f8c864d0084e7aad4a7bf036ca1e2fd24e187889115f0e0014483457f70551,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710439668205624802,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 643944af-9748-4c53-a7ef-8d5ac13c429c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d42b57b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00106371aee10523481f63f0ae9c4b7053e01868d286f26c7bd2ef50aadc0b78,PodSandboxId:bafdbfe51571f76e0957dd3223a51f6cfc78251112b923464c2bb76a2ab55239,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710439634056139983,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-56zbn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 695d7638-01fd-4b91-8f96-dc32a597877d,},Annotations:map[string]string{io.kubernetes.container.hash: fe79a624,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23aff411daf46f990a9b6ace7fc2120de0525838c1f8361ff773b00db280428,PodSandboxId:e212cc7a9482e674538c3aac5489b8ac052c85fe3f6f31e955b68bdddafcd369,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710439609920606025,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wmndh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b329d666-1dc2-4cdd-8739-e5edd448c7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 8d0ee242,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e639a2ce809bbb5aea4c3f6929aaf841fd308b646a5482145b423a1df94a9a,PodSandboxId:fa647de68607a6ee0c26274561f0ea2c5a01d48f87603d765c485701726e0e26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710439603159330646,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dcpl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 06bd68c5-846d-49d7-a756-b59926e729d9,},Annotations:map[string]string{io.kubernetes.container.hash: 8186e77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a2b79332d3cc8a0bb330fa15e2fd1c854e203091acdc8260753931382ca64a,PodSandboxId:acb0b4f28932e694b901c56149f04bb7ce11a7a6334db58db47a0353a084ef40,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710439597530396316,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-xg4nt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0f6ed210-68d9-4437-b638-6c36d1c56f27,},Annotations:map[string]string{io.kubernetes.container.hash: 3e90f918,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c15ea11c647af8023beb566338ff60ab8fb476f2d1cd0e951741c244c4b0a76,PodSandboxId:785bf57227aeb2c8eb172269779b8b88999ed3db5d30e4e7e50fd8c9a00558d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710439562703546702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc11c88-3302-458c-82aa-7669097aeff0,},Annotations:map[string]string{io.kubernetes.container.hash: 705dc359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73d2173ca92c652c54af4baa3b4ca8c40ed6f1a2e2141b80e55c71e74116095,PodSandboxId:b3a3b8d59370c9b60691c23bc4fbef095e304e6bbe011d15f653bbf9109996ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710439552975297608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lh8vw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d39c48-8dda-4927-a694-630e52935d70,},Annotations:map[string]string{io.kubernetes.container.hash: 3f24180a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4d2e14daee94ecc03ba7b53b3fd41e7c3cbdb1f517d95bab65d60ebcc46942,PodSandboxId:54905f40e891cd63e893720ff03d9cffef020a3e3d060d503f7e6d159d58e3
69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710439552023730770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgj2v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a57f545-88bc-4110-8da7-6028de02678b,},Annotations:map[string]string{io.kubernetes.container.hash: 54af0b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993d781fd1255b5ffa919691d2bdb5293f87f6082aba5ab9670c8664ec5541f9,PodSandboxId:ea3a566870f71ea6e06c1187c629943a055d93988d9f43f1be3a40c629205ef6,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710439531261766307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16752ae5f803f36cc48ce3618d6b6279,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42145039a1927b2cc13a463a77e6364a1129ea2ab53423ec92d321fc3890a6b4,PodSandboxId:f1e5b7d021edfde5e52348157808b92b223955b640ca8a1a3204c35bf969fc88,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710439531288261798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a2ea85b2f936d4608e3d7888efd8da,},Annotations:map[string]string{io.kubernetes.container.hash: 3f3ec927,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cbf041a827d5e777fa2bb34308264337086746a0d0826f0c7a3486ba6568d3a,PodSandboxId:f6cb6b1eef02fc3ed9dd2a2c711b425ec558e842d0a7c5e9344c245af5cc8969,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710439531242612762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd5f72dfd6f80611c1ad71ac2fee9d7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085f15909479b24319d72c5103b206e0341aa88aab190e6ec8c1911c08dd1d37,PodSandboxId:00f3ff97e2bc2353011b90d3bd506fc1f182f4fa4c16feaa0a4d9e33974d9a2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710439531239916058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e002bd628e1ba5a39cd67fdddb0688,},Annotations:map[string]string{io.kubernetes.container.hash: 24bbd196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1ade9eb1-de65-4020-affd-0c27570e203a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.883629848Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4e2efccb-345d-41ac-8d29-b655bdfd4438 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.883776241Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4e2efccb-345d-41ac-8d29-b655bdfd4438 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.884816840Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ceafff87-2db5-4dd5-a599-a7f8197bd818 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.885987894Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710439817885963031,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ceafff87-2db5-4dd5-a599-a7f8197bd818 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.886930175Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=858af913-a2e8-432a-8358-90a1645f88c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.887038594Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=858af913-a2e8-432a-8358-90a1645f88c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.887353236Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36e08d1196793f926feea5bc5dc2052a60184c5435606c435bc2f68a54702567,PodSandboxId:77656f285938d434d917bbc47f2878c29a317e519748c82051fcdfc287cbb9ce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710439809543332757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-c9vzt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be7db71b-156d-4594-a59d-837db6684f84,},Annotations:map[string]string{io.kubernetes.container.hash: 31da147b,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8975d75c1e61a1b2be5a5d102497ecf5599a7377cdcc9ecdedbbe73ecd5f89cb,PodSandboxId:de28b5daac70e06237729faf53894daeea45abe36b986b060663a53003ebd76c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710439673726378794,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-lnjcc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 88a0febe,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d4e39ad5438932559c28efe0363f5b5c9d1493f0766f464cc874b78fe7c08e,PodSandboxId:49f8c864d0084e7aad4a7bf036ca1e2fd24e187889115f0e0014483457f70551,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710439668205624802,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 643944af-9748-4c53-a7ef-8d5ac13c429c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d42b57b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00106371aee10523481f63f0ae9c4b7053e01868d286f26c7bd2ef50aadc0b78,PodSandboxId:bafdbfe51571f76e0957dd3223a51f6cfc78251112b923464c2bb76a2ab55239,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710439634056139983,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-56zbn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 695d7638-01fd-4b91-8f96-dc32a597877d,},Annotations:map[string]string{io.kubernetes.container.hash: fe79a624,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23aff411daf46f990a9b6ace7fc2120de0525838c1f8361ff773b00db280428,PodSandboxId:e212cc7a9482e674538c3aac5489b8ac052c85fe3f6f31e955b68bdddafcd369,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710439609920606025,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wmndh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b329d666-1dc2-4cdd-8739-e5edd448c7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 8d0ee242,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e639a2ce809bbb5aea4c3f6929aaf841fd308b646a5482145b423a1df94a9a,PodSandboxId:fa647de68607a6ee0c26274561f0ea2c5a01d48f87603d765c485701726e0e26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710439603159330646,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dcpl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 06bd68c5-846d-49d7-a756-b59926e729d9,},Annotations:map[string]string{io.kubernetes.container.hash: 8186e77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a2b79332d3cc8a0bb330fa15e2fd1c854e203091acdc8260753931382ca64a,PodSandboxId:acb0b4f28932e694b901c56149f04bb7ce11a7a6334db58db47a0353a084ef40,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710439597530396316,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-xg4nt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0f6ed210-68d9-4437-b638-6c36d1c56f27,},Annotations:map[string]string{io.kubernetes.container.hash: 3e90f918,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c15ea11c647af8023beb566338ff60ab8fb476f2d1cd0e951741c244c4b0a76,PodSandboxId:785bf57227aeb2c8eb172269779b8b88999ed3db5d30e4e7e50fd8c9a00558d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710439562703546702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc11c88-3302-458c-82aa-7669097aeff0,},Annotations:map[string]string{io.kubernetes.container.hash: 705dc359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73d2173ca92c652c54af4baa3b4ca8c40ed6f1a2e2141b80e55c71e74116095,PodSandboxId:b3a3b8d59370c9b60691c23bc4fbef095e304e6bbe011d15f653bbf9109996ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710439552975297608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lh8vw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d39c48-8dda-4927-a694-630e52935d70,},Annotations:map[string]string{io.kubernetes.container.hash: 3f24180a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4d2e14daee94ecc03ba7b53b3fd41e7c3cbdb1f517d95bab65d60ebcc46942,PodSandboxId:54905f40e891cd63e893720ff03d9cffef020a3e3d060d503f7e6d159d58e3
69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710439552023730770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgj2v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a57f545-88bc-4110-8da7-6028de02678b,},Annotations:map[string]string{io.kubernetes.container.hash: 54af0b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993d781fd1255b5ffa919691d2bdb5293f87f6082aba5ab9670c8664ec5541f9,PodSandboxId:ea3a566870f71ea6e06c1187c629943a055d93988d9f43f1be3a40c629205ef6,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710439531261766307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16752ae5f803f36cc48ce3618d6b6279,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42145039a1927b2cc13a463a77e6364a1129ea2ab53423ec92d321fc3890a6b4,PodSandboxId:f1e5b7d021edfde5e52348157808b92b223955b640ca8a1a3204c35bf969fc88,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710439531288261798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a2ea85b2f936d4608e3d7888efd8da,},Annotations:map[string]string{io.kubernetes.container.hash: 3f3ec927,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cbf041a827d5e777fa2bb34308264337086746a0d0826f0c7a3486ba6568d3a,PodSandboxId:f6cb6b1eef02fc3ed9dd2a2c711b425ec558e842d0a7c5e9344c245af5cc8969,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710439531242612762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd5f72dfd6f80611c1ad71ac2fee9d7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085f15909479b24319d72c5103b206e0341aa88aab190e6ec8c1911c08dd1d37,PodSandboxId:00f3ff97e2bc2353011b90d3bd506fc1f182f4fa4c16feaa0a4d9e33974d9a2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710439531239916058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e002bd628e1ba5a39cd67fdddb0688,},Annotations:map[string]string{io.kubernetes.container.hash: 24bbd196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=858af913-a2e8-432a-8358-90a1645f88c7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.917292801Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=c094c788-f55a-40f6-8b3f-89aa3ca270a7 name=/runtime.v1.RuntimeService/Status
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.917359884Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=c094c788-f55a-40f6-8b3f-89aa3ca270a7 name=/runtime.v1.RuntimeService/Status
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.933377248Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b81fde16-681b-4dc9-86fe-2a8f1e9c08cd name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.933438451Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b81fde16-681b-4dc9-86fe-2a8f1e9c08cd name=/runtime.v1.RuntimeService/Version
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.935026728Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76ea320a-fa74-475f-b59a-fcbe306fea42 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.936411735Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710439817936382102,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:564136,},InodesUsed:&UInt64Value{Value:203,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76ea320a-fa74-475f-b59a-fcbe306fea42 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.937065566Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=453a9a79-2355-45d8-9a0b-fc6808326f5d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.937114428Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=453a9a79-2355-45d8-9a0b-fc6808326f5d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:10:17 addons-677681 crio[669]: time="2024-03-14 18:10:17.937581004Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:36e08d1196793f926feea5bc5dc2052a60184c5435606c435bc2f68a54702567,PodSandboxId:77656f285938d434d917bbc47f2878c29a317e519748c82051fcdfc287cbb9ce,Metadata:&ContainerMetadata{Name:hello-world-app,Attempt:0,},Image:&ImageSpec{Image:gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dd1b12fcb60978ac32686ef6732d56f612c8636ef86693c09613946a54c69d79,State:CONTAINER_RUNNING,CreatedAt:1710439809543332757,Labels:map[string]string{io.kubernetes.container.name: hello-world-app,io.kubernetes.pod.name: hello-world-app-5d77478584-c9vzt,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: be7db71b-156d-4594-a59d-837db6684f84,},Annotations:map[string]string{io.kubernetes.container.hash: 31da147b,io.kubernetes.containe
r.ports: [{\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8975d75c1e61a1b2be5a5d102497ecf5599a7377cdcc9ecdedbbe73ecd5f89cb,PodSandboxId:de28b5daac70e06237729faf53894daeea45abe36b986b060663a53003ebd76c,Metadata:&ContainerMetadata{Name:headlamp,Attempt:0,},Image:&ImageSpec{Image:ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:dfaa4a7414123ef23c2a89f87227d62b5ee118efc46f47647b2c9f77508e67b4,State:CONTAINER_RUNNING,CreatedAt:1710439673726378794,Labels:map[string]string{io.kubernetes.container.name: headlamp,io.kubernetes.pod.name: headlamp-5485c556b-lnjcc,io.kubernetes.pod.namespace: headlamp,io.kubernetes.pod.uid: e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c,},Annotat
ions:map[string]string{io.kubernetes.container.hash: 88a0febe,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":4466,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:33d4e39ad5438932559c28efe0363f5b5c9d1493f0766f464cc874b78fe7c08e,PodSandboxId:49f8c864d0084e7aad4a7bf036ca1e2fd24e187889115f0e0014483457f70551,Metadata:&ContainerMetadata{Name:nginx,Attempt:0,},Image:&ImageSpec{Image:docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6913ed9ec8d009744018c1740879327fe2e085935b2cce7a234bf05347b670d7,State:CONTAINER_RUNNING,CreatedAt:1710439668205624802,Labels:map[string]string{io.kubernetes.container.name: nginx,io.kubernetes.pod.name: nginx,io.kubernetes.pod.namespace: default
,io.kubernetes.pod.uid: 643944af-9748-4c53-a7ef-8d5ac13c429c,},Annotations:map[string]string{io.kubernetes.container.hash: 1d42b57b,io.kubernetes.container.ports: [{\"containerPort\":80,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:00106371aee10523481f63f0ae9c4b7053e01868d286f26c7bd2ef50aadc0b78,PodSandboxId:bafdbfe51571f76e0957dd3223a51f6cfc78251112b923464c2bb76a2ab55239,Metadata:&ContainerMetadata{Name:gcp-auth,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:db2fc13d44d50b42f9eb2fbba7228784ce9600b2c9b06f94e7f38df6b0f7e522,State:CONTAINER_RUNNING,CreatedAt:1710439634056139983,Labels:map[string]string{io.kubernetes.container.name: gcp-au
th,io.kubernetes.pod.name: gcp-auth-7d69788767-56zbn,io.kubernetes.pod.namespace: gcp-auth,io.kubernetes.pod.uid: 695d7638-01fd-4b91-8f96-dc32a597877d,},Annotations:map[string]string{io.kubernetes.container.hash: fe79a624,io.kubernetes.container.ports: [{\"containerPort\":8443,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c23aff411daf46f990a9b6ace7fc2120de0525838c1f8361ff773b00db280428,PodSandboxId:e212cc7a9482e674538c3aac5489b8ac052c85fe3f6f31e955b68bdddafcd369,Metadata:&ContainerMetadata{Name:patch,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4a1009248b6f60a370895135,State:CONTAI
NER_EXITED,CreatedAt:1710439609920606025,Labels:map[string]string{io.kubernetes.container.name: patch,io.kubernetes.pod.name: ingress-nginx-admission-patch-wmndh,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: b329d666-1dc2-4cdd-8739-e5edd448c7b6,},Annotations:map[string]string{io.kubernetes.container.hash: 8d0ee242,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:68e639a2ce809bbb5aea4c3f6929aaf841fd308b646a5482145b423a1df94a9a,PodSandboxId:fa647de68607a6ee0c26274561f0ea2c5a01d48f87603d765c485701726e0e26,Metadata:&ContainerMetadata{Name:create,Attempt:0,},Image:&ImageSpec{Image:registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:b29d748098e32a42a2ac743679dd53501184ba9c4
a1009248b6f60a370895135,State:CONTAINER_EXITED,CreatedAt:1710439603159330646,Labels:map[string]string{io.kubernetes.container.name: create,io.kubernetes.pod.name: ingress-nginx-admission-create-dcpl9,io.kubernetes.pod.namespace: ingress-nginx,io.kubernetes.pod.uid: 06bd68c5-846d-49d7-a756-b59926e729d9,},Annotations:map[string]string{io.kubernetes.container.hash: 8186e77a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:23a2b79332d3cc8a0bb330fa15e2fd1c854e203091acdc8260753931382ca64a,PodSandboxId:acb0b4f28932e694b901c56149f04bb7ce11a7a6334db58db47a0353a084ef40,Metadata:&ContainerMetadata{Name:yakd,Attempt:0,},Image:&ImageSpec{Image:docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:31de47c733c918d8371361afabd259bf
b18f75409c61d94dce8151a83ee615a5,State:CONTAINER_RUNNING,CreatedAt:1710439597530396316,Labels:map[string]string{io.kubernetes.container.name: yakd,io.kubernetes.pod.name: yakd-dashboard-9947fc6bf-xg4nt,io.kubernetes.pod.namespace: yakd-dashboard,io.kubernetes.pod.uid: 0f6ed210-68d9-4437-b638-6c36d1c56f27,},Annotations:map[string]string{io.kubernetes.container.hash: 3e90f918,io.kubernetes.container.ports: [{\"name\":\"http\",\"containerPort\":8080,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3c15ea11c647af8023beb566338ff60ab8fb476f2d1cd0e951741c244c4b0a76,PodSandboxId:785bf57227aeb2c8eb172269779b8b88999ed3db5d30e4e7e50fd8c9a00558d8,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]stri
ng{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710439562703546702,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7cc11c88-3302-458c-82aa-7669097aeff0,},Annotations:map[string]string{io.kubernetes.container.hash: 705dc359,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a73d2173ca92c652c54af4baa3b4ca8c40ed6f1a2e2141b80e55c71e74116095,PodSandboxId:b3a3b8d59370c9b60691c23bc4fbef095e304e6bbe011d15f653bbf9109996ce,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:
,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710439552975297608,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-lh8vw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 51d39c48-8dda-4927-a694-630e52935d70,},Annotations:map[string]string{io.kubernetes.container.hash: 3f24180a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7d4d2e14daee94ecc03ba7b53b3fd41e7c3cbdb1f517d95bab65d60ebcc46942,PodSandboxId:54905f40e891cd63e893720ff03d9cffef020a3e3d060d503f7e6d159d58e3
69,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710439552023730770,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-xgj2v,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9a57f545-88bc-4110-8da7-6028de02678b,},Annotations:map[string]string{io.kubernetes.container.hash: 54af0b5d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:993d781fd1255b5ffa919691d2bdb5293f87f6082aba5ab9670c8664ec5541f9,PodSandboxId:ea3a566870f71ea6e06c1187c629943a055d93988d9f43f1be3a40c629205ef6,Metadata:&ContainerMetadata{Na
me:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710439531261766307,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 16752ae5f803f36cc48ce3618d6b6279,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:42145039a1927b2cc13a463a77e6364a1129ea2ab53423ec92d321fc3890a6b4,PodSandboxId:f1e5b7d021edfde5e52348157808b92b223955b640ca8a1a3204c35bf969fc88,Metadata:&ContainerMetadata{Name:etcd,Attempt:0
,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710439531288261798,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e3a2ea85b2f936d4608e3d7888efd8da,},Annotations:map[string]string{io.kubernetes.container.hash: 3f3ec927,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8cbf041a827d5e777fa2bb34308264337086746a0d0826f0c7a3486ba6568d3a,PodSandboxId:f6cb6b1eef02fc3ed9dd2a2c711b425ec558e842d0a7c5e9344c245af5cc8969,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d0
58aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710439531242612762,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8dd5f72dfd6f80611c1ad71ac2fee9d7,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:085f15909479b24319d72c5103b206e0341aa88aab190e6ec8c1911c08dd1d37,PodSandboxId:00f3ff97e2bc2353011b90d3bd506fc1f182f4fa4c16feaa0a4d9e33974d9a2d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7
fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710439531239916058,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-addons-677681,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c3e002bd628e1ba5a39cd67fdddb0688,},Annotations:map[string]string{io.kubernetes.container.hash: 24bbd196,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=453a9a79-2355-45d8-9a0b-fc6808326f5d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                                        CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	36e08d1196793       gcr.io/google-samples/hello-app@sha256:b1455e1c4fcc5ea1023c9e3b584cd84b64eb920e332feff690a2829696e379e7                      8 seconds ago       Running             hello-world-app           0                   77656f285938d       hello-world-app-5d77478584-c9vzt
	8975d75c1e61a       ghcr.io/headlamp-k8s/headlamp@sha256:19628eec9aaecf7944a049b6ab67f45e818365b9ec68cb7808ce1f6feb52d750                        2 minutes ago       Running             headlamp                  0                   de28b5daac70e       headlamp-5485c556b-lnjcc
	33d4e39ad5438       docker.io/library/nginx@sha256:6a2f8b28e45c4adea04ec207a251fd4a2df03ddc930f782af51e315ebc76e9a9                              2 minutes ago       Running             nginx                     0                   49f8c864d0084       nginx
	00106371aee10       gcr.io/k8s-minikube/gcp-auth-webhook@sha256:507b9d2f77a65700ff2462a02aa2c83780ff74ecb06c9275c5b5b9b1fa44269b                 3 minutes ago       Running             gcp-auth                  0                   bafdbfe51571f       gcp-auth-7d69788767-56zbn
	c23aff411daf4       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              patch                     0                   e212cc7a9482e       ingress-nginx-admission-patch-wmndh
	68e639a2ce809       registry.k8s.io/ingress-nginx/kube-webhook-certgen@sha256:261843b59d96d7e8a91e89545c7f27a066b1ab5cddbea8236cf1695c31889023   3 minutes ago       Exited              create                    0                   fa647de68607a       ingress-nginx-admission-create-dcpl9
	23a2b79332d3c       docker.io/marcnuri/yakd@sha256:a3f540278e4c11373e15605311851dd9c64d208f4d63e727bccc0e39f9329310                              3 minutes ago       Running             yakd                      0                   acb0b4f28932e       yakd-dashboard-9947fc6bf-xg4nt
	3c15ea11c647a       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                                             4 minutes ago       Running             storage-provisioner       0                   785bf57227aeb       storage-provisioner
	a73d2173ca92c       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                                             4 minutes ago       Running             coredns                   0                   b3a3b8d59370c       coredns-5dd5756b68-lh8vw
	7d4d2e14daee9       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                                             4 minutes ago       Running             kube-proxy                0                   54905f40e891c       kube-proxy-xgj2v
	42145039a1927       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                                             4 minutes ago       Running             etcd                      0                   f1e5b7d021edf       etcd-addons-677681
	993d781fd1255       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                                             4 minutes ago       Running             kube-scheduler            0                   ea3a566870f71       kube-scheduler-addons-677681
	8cbf041a827d5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                                             4 minutes ago       Running             kube-controller-manager   0                   f6cb6b1eef02f       kube-controller-manager-addons-677681
	085f15909479b       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                                             4 minutes ago       Running             kube-apiserver            0                   00f3ff97e2bc2       kube-apiserver-addons-677681
	
	
	==> coredns [a73d2173ca92c652c54af4baa3b4ca8c40ed6f1a2e2141b80e55c71e74116095] <==
	[INFO] 10.244.0.8:59605 - 64528 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.000108716s
	[INFO] 10.244.0.8:49029 - 37090 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000103951s
	[INFO] 10.244.0.8:49029 - 55009 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000085467s
	[INFO] 10.244.0.8:58128 - 49900 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.00009104s
	[INFO] 10.244.0.8:58128 - 746 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000079727s
	[INFO] 10.244.0.8:39411 - 20231 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.00014601s
	[INFO] 10.244.0.8:39411 - 6425 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000087451s
	[INFO] 10.244.0.8:59324 - 63575 "AAAA IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00005152s
	[INFO] 10.244.0.8:59324 - 43090 "A IN registry.kube-system.svc.cluster.local.kube-system.svc.cluster.local. udp 86 false 512" NXDOMAIN qr,aa,rd 179 0.00003846s
	[INFO] 10.244.0.8:33882 - 56556 "AAAA IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000030242s
	[INFO] 10.244.0.8:33882 - 64234 "A IN registry.kube-system.svc.cluster.local.svc.cluster.local. udp 74 false 512" NXDOMAIN qr,aa,rd 167 0.000028575s
	[INFO] 10.244.0.8:39314 - 7803 "A IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000027493s
	[INFO] 10.244.0.8:39314 - 57444 "AAAA IN registry.kube-system.svc.cluster.local.cluster.local. udp 70 false 512" NXDOMAIN qr,aa,rd 163 0.000036251s
	[INFO] 10.244.0.8:34799 - 43158 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000053445s
	[INFO] 10.244.0.8:34799 - 7573 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000037513s
	[INFO] 10.244.0.22:40779 - 33308 "A IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000450678s
	[INFO] 10.244.0.22:52904 - 6188 "AAAA IN storage.googleapis.com.gcp-auth.svc.cluster.local. udp 78 false 1232" NXDOMAIN qr,aa,rd 160 0.000128933s
	[INFO] 10.244.0.22:34414 - 17351 "AAAA IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.000136323s
	[INFO] 10.244.0.22:38683 - 19725 "A IN storage.googleapis.com.svc.cluster.local. udp 69 false 1232" NXDOMAIN qr,aa,rd 151 0.00006024s
	[INFO] 10.244.0.22:43370 - 64535 "AAAA IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000097591s
	[INFO] 10.244.0.22:49916 - 57794 "A IN storage.googleapis.com.cluster.local. udp 65 false 1232" NXDOMAIN qr,aa,rd 147 0.000275216s
	[INFO] 10.244.0.22:42194 - 46877 "AAAA IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 240 0.002834545s
	[INFO] 10.244.0.22:37064 - 63174 "A IN storage.googleapis.com. udp 51 false 1232" NOERROR qr,rd,ra 420 0.001244234s
	[INFO] 10.244.0.25:39678 - 2 "AAAA IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 149 0.000396901s
	[INFO] 10.244.0.25:42212 - 3 "A IN registry.kube-system.svc.cluster.local. udp 56 false 512" NOERROR qr,aa,rd 110 0.000123644s
	
	
	==> describe nodes <==
	Name:               addons-677681
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=addons-677681
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=addons-677681
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_05_37_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	                    topology.hostpath.csi/node=addons-677681
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:05:33 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  addons-677681
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:10:12 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:10:13 +0000   Thu, 14 Mar 2024 18:05:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:10:13 +0000   Thu, 14 Mar 2024 18:05:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:10:13 +0000   Thu, 14 Mar 2024 18:05:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:10:13 +0000   Thu, 14 Mar 2024 18:05:38 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.215
	  Hostname:    addons-677681
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             3912784Ki
	  pods:               110
	System Info:
	  Machine ID:                 2a04801147ec4ba483c226f87902d4c5
	  System UUID:                2a048011-47ec-4ba4-83c2-26f87902d4c5
	  Boot ID:                    8ee97c52-e567-4a12-a42e-b9cbf78a5695
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (12 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     hello-world-app-5d77478584-c9vzt         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11s
	  default                     nginx                                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  gcp-auth                    gcp-auth-7d69788767-56zbn                0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m13s
	  headlamp                    headlamp-5485c556b-lnjcc                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m33s
	  kube-system                 coredns-5dd5756b68-lh8vw                 100m (5%)     0 (0%)      70Mi (1%)        170Mi (4%)     4m28s
	  kube-system                 etcd-addons-677681                       100m (5%)     0 (0%)      100Mi (2%)       0 (0%)         4m41s
	  kube-system                 kube-apiserver-addons-677681             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-controller-manager-addons-677681    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 kube-proxy-xgj2v                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m28s
	  kube-system                 kube-scheduler-addons-677681             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m41s
	  kube-system                 storage-provisioner                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m20s
	  yakd-dashboard              yakd-dashboard-9947fc6bf-xg4nt           0 (0%)        0 (0%)      128Mi (3%)       256Mi (6%)     4m20s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  0 (0%)
	  memory             298Mi (7%)  426Mi (11%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 4m24s  kube-proxy       
	  Normal  Starting                 4m41s  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  4m41s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  4m41s  kubelet          Node addons-677681 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    4m41s  kubelet          Node addons-677681 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     4m41s  kubelet          Node addons-677681 status is now: NodeHasSufficientPID
	  Normal  NodeReady                4m40s  kubelet          Node addons-677681 status is now: NodeReady
	  Normal  RegisteredNode           4m29s  node-controller  Node addons-677681 event: Registered Node addons-677681 in Controller
	
	
	==> dmesg <==
	[  +0.862850] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.903807] systemd-fstab-generator[1242]: Ignoring "noauto" option for root device
	[  +0.076707] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.258999] systemd-fstab-generator[1455]: Ignoring "noauto" option for root device
	[  +0.116758] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.117890] kauditd_printk_skb: 78 callbacks suppressed
	[  +5.025608] kauditd_printk_skb: 90 callbacks suppressed
	[Mar14 18:06] kauditd_printk_skb: 94 callbacks suppressed
	[  +6.397822] kauditd_printk_skb: 17 callbacks suppressed
	[  +5.713465] kauditd_printk_skb: 4 callbacks suppressed
	[  +9.180475] kauditd_printk_skb: 8 callbacks suppressed
	[ +16.582187] kauditd_printk_skb: 12 callbacks suppressed
	[ +11.033503] kauditd_printk_skb: 44 callbacks suppressed
	[  +5.130616] kauditd_printk_skb: 72 callbacks suppressed
	[Mar14 18:07] kauditd_printk_skb: 13 callbacks suppressed
	[  +5.175998] kauditd_printk_skb: 45 callbacks suppressed
	[ +12.114977] kauditd_printk_skb: 45 callbacks suppressed
	[  +5.268310] kauditd_printk_skb: 39 callbacks suppressed
	[  +5.151398] kauditd_printk_skb: 36 callbacks suppressed
	[ +11.144626] kauditd_printk_skb: 39 callbacks suppressed
	[Mar14 18:08] kauditd_printk_skb: 1 callbacks suppressed
	[ +18.383914] kauditd_printk_skb: 5 callbacks suppressed
	[  +8.548630] kauditd_printk_skb: 25 callbacks suppressed
	[Mar14 18:10] kauditd_printk_skb: 2 callbacks suppressed
	[  +5.286635] kauditd_printk_skb: 17 callbacks suppressed
	
	
	==> etcd [42145039a1927b2cc13a463a77e6364a1129ea2ab53423ec92d321fc3890a6b4] <==
	{"level":"warn","ts":"2024-03-14T18:07:08.728982Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:08.299135Z","time spent":"429.797274ms","remote":"127.0.0.1:56070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1126 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"warn","ts":"2024-03-14T18:07:08.728995Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:08.35839Z","time spent":"370.601544ms","remote":"127.0.0.1:55860","response type":"/etcdserverpb.KV/Range","request count":0,"request size":18,"response count":0,"response size":28,"request content":"key:\"/registry/health\" "}
	{"level":"warn","ts":"2024-03-14T18:07:08.729165Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"309.973572ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13724"}
	{"level":"info","ts":"2024-03-14T18:07:08.729181Z","caller":"traceutil/trace.go:171","msg":"trace[1731961128] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1127; }","duration":"309.990071ms","start":"2024-03-14T18:07:08.419187Z","end":"2024-03-14T18:07:08.729177Z","steps":["trace[1731961128] 'agreement among raft nodes before linearized reading'  (duration: 309.9424ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:08.729201Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:08.419175Z","time spent":"310.023122ms","remote":"127.0.0.1:56084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":62,"response count":3,"response size":13747,"request content":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" "}
	{"level":"info","ts":"2024-03-14T18:07:11.620791Z","caller":"traceutil/trace.go:171","msg":"trace[1106111915] linearizableReadLoop","detail":"{readStateIndex:1164; appliedIndex:1163; }","duration":"354.28408ms","start":"2024-03-14T18:07:11.26649Z","end":"2024-03-14T18:07:11.620774Z","steps":["trace[1106111915] 'read index received'  (duration: 353.984487ms)","trace[1106111915] 'applied index is now lower than readState.Index'  (duration: 298.34µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:07:11.620942Z","caller":"traceutil/trace.go:171","msg":"trace[1660757450] transaction","detail":"{read_only:false; response_revision:1132; number_of_response:1; }","duration":"473.732691ms","start":"2024-03-14T18:07:11.147203Z","end":"2024-03-14T18:07:11.620935Z","steps":["trace[1660757450] 'process raft request'  (duration: 473.32244ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:11.621065Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"261.853661ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/health\" ","response":"range_response_count:0 size:5"}
	{"level":"info","ts":"2024-03-14T18:07:11.621135Z","caller":"traceutil/trace.go:171","msg":"trace[455281872] range","detail":"{range_begin:/registry/health; range_end:; response_count:0; response_revision:1132; }","duration":"261.930574ms","start":"2024-03-14T18:07:11.359194Z","end":"2024-03-14T18:07:11.621124Z","steps":["trace[455281872] 'agreement among raft nodes before linearized reading'  (duration: 261.827942ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:11.621276Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"202.390306ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/ingress-nginx/\" range_end:\"/registry/pods/ingress-nginx0\" ","response":"range_response_count:3 size:13724"}
	{"level":"info","ts":"2024-03-14T18:07:11.621352Z","caller":"traceutil/trace.go:171","msg":"trace[1856698493] range","detail":"{range_begin:/registry/pods/ingress-nginx/; range_end:/registry/pods/ingress-nginx0; response_count:3; response_revision:1132; }","duration":"202.436403ms","start":"2024-03-14T18:07:11.418877Z","end":"2024-03-14T18:07:11.621314Z","steps":["trace[1856698493] 'agreement among raft nodes before linearized reading'  (duration: 202.351818ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:11.621411Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"354.933145ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" ","response":"range_response_count:3 size:10814"}
	{"level":"info","ts":"2024-03-14T18:07:11.621466Z","caller":"traceutil/trace.go:171","msg":"trace[1927865321] range","detail":"{range_begin:/registry/pods/gcp-auth/; range_end:/registry/pods/gcp-auth0; response_count:3; response_revision:1132; }","duration":"354.991437ms","start":"2024-03-14T18:07:11.266468Z","end":"2024-03-14T18:07:11.621459Z","steps":["trace[1927865321] 'agreement among raft nodes before linearized reading'  (duration: 354.894533ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:11.621498Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:11.266455Z","time spent":"355.035472ms","remote":"127.0.0.1:56084","response type":"/etcdserverpb.KV/Range","request count":0,"request size":52,"response count":3,"response size":10837,"request content":"key:\"/registry/pods/gcp-auth/\" range_end:\"/registry/pods/gcp-auth0\" "}
	{"level":"warn","ts":"2024-03-14T18:07:11.621085Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:11.147189Z","time spent":"473.794884ms","remote":"127.0.0.1:56138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":483,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/snapshot-controller-leader\" mod_revision:1125 > success:<request_put:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" value_size:420 >> failure:<request_range:<key:\"/registry/leases/kube-system/snapshot-controller-leader\" > >"}
	{"level":"info","ts":"2024-03-14T18:07:37.223282Z","caller":"traceutil/trace.go:171","msg":"trace[346388495] transaction","detail":"{read_only:false; response_revision:1363; number_of_response:1; }","duration":"329.147479ms","start":"2024-03-14T18:07:36.894121Z","end":"2024-03-14T18:07:37.223268Z","steps":["trace[346388495] 'process raft request'  (duration: 328.919127ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:37.223457Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:36.894108Z","time spent":"329.261898ms","remote":"127.0.0.1:56070","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":1098,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" mod_revision:1358 > success:<request_put:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" value_size:1025 >> failure:<request_range:<key:\"/registry/services/endpoints/kube-system/k8s.io-minikube-hostpath\" > >"}
	{"level":"info","ts":"2024-03-14T18:07:37.224008Z","caller":"traceutil/trace.go:171","msg":"trace[1550010564] linearizableReadLoop","detail":"{readStateIndex:1405; appliedIndex:1404; }","duration":"141.938632ms","start":"2024-03-14T18:07:37.082058Z","end":"2024-03-14T18:07:37.223997Z","steps":["trace[1550010564] 'read index received'  (duration: 141.550039ms)","trace[1550010564] 'applied index is now lower than readState.Index'  (duration: 388.08µs)"],"step_count":2}
	{"level":"info","ts":"2024-03-14T18:07:37.224164Z","caller":"traceutil/trace.go:171","msg":"trace[450125515] transaction","detail":"{read_only:false; response_revision:1364; number_of_response:1; }","duration":"320.818845ms","start":"2024-03-14T18:07:36.903337Z","end":"2024-03-14T18:07:37.224156Z","steps":["trace[450125515] 'process raft request'  (duration: 320.5528ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:37.224223Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:07:36.903323Z","time spent":"320.86512ms","remote":"127.0.0.1:56138","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":678,"response count":0,"response size":39,"request content":"compare:<target:MOD key:\"/registry/leases/kube-system/apiserver-cae7iulqqjhhiwogdcbk5edsbm\" mod_revision:1258 > success:<request_put:<key:\"/registry/leases/kube-system/apiserver-cae7iulqqjhhiwogdcbk5edsbm\" value_size:605 >> failure:<request_range:<key:\"/registry/leases/kube-system/apiserver-cae7iulqqjhhiwogdcbk5edsbm\" > >"}
	{"level":"warn","ts":"2024-03-14T18:07:37.224332Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"142.320366ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/persistentvolumeclaims/default/hpvc\" ","response":"range_response_count:1 size:822"}
	{"level":"info","ts":"2024-03-14T18:07:37.224357Z","caller":"traceutil/trace.go:171","msg":"trace[423980672] range","detail":"{range_begin:/registry/persistentvolumeclaims/default/hpvc; range_end:; response_count:1; response_revision:1364; }","duration":"142.345363ms","start":"2024-03-14T18:07:37.082006Z","end":"2024-03-14T18:07:37.224351Z","steps":["trace[423980672] 'agreement among raft nodes before linearized reading'  (duration: 142.299859ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:07:55.850112Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"280.306926ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/masterleases/192.168.39.215\" ","response":"range_response_count:1 size:135"}
	{"level":"info","ts":"2024-03-14T18:07:55.850163Z","caller":"traceutil/trace.go:171","msg":"trace[1751529080] range","detail":"{range_begin:/registry/masterleases/192.168.39.215; range_end:; response_count:1; response_revision:1532; }","duration":"280.366053ms","start":"2024-03-14T18:07:55.569785Z","end":"2024-03-14T18:07:55.850151Z","steps":["trace[1751529080] 'range keys from in-memory index tree'  (duration: 280.222663ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:07:58.809072Z","caller":"traceutil/trace.go:171","msg":"trace[1902656510] transaction","detail":"{read_only:false; response_revision:1537; number_of_response:1; }","duration":"210.795851ms","start":"2024-03-14T18:07:58.598246Z","end":"2024-03-14T18:07:58.809042Z","steps":["trace[1902656510] 'process raft request'  (duration: 210.381724ms)"],"step_count":1}
	
	
	==> gcp-auth [00106371aee10523481f63f0ae9c4b7053e01868d286f26c7bd2ef50aadc0b78] <==
	2024/03/14 18:07:14 GCP Auth Webhook started!
	2024/03/14 18:07:22 Ready to marshal response ...
	2024/03/14 18:07:22 Ready to write response ...
	2024/03/14 18:07:22 Ready to marshal response ...
	2024/03/14 18:07:22 Ready to write response ...
	2024/03/14 18:07:25 Ready to marshal response ...
	2024/03/14 18:07:25 Ready to write response ...
	2024/03/14 18:07:26 Ready to marshal response ...
	2024/03/14 18:07:26 Ready to write response ...
	2024/03/14 18:07:32 Ready to marshal response ...
	2024/03/14 18:07:32 Ready to write response ...
	2024/03/14 18:07:45 Ready to marshal response ...
	2024/03/14 18:07:45 Ready to write response ...
	2024/03/14 18:07:45 Ready to marshal response ...
	2024/03/14 18:07:45 Ready to write response ...
	2024/03/14 18:07:45 Ready to marshal response ...
	2024/03/14 18:07:45 Ready to write response ...
	2024/03/14 18:07:45 Ready to marshal response ...
	2024/03/14 18:07:45 Ready to write response ...
	2024/03/14 18:07:46 Ready to marshal response ...
	2024/03/14 18:07:46 Ready to write response ...
	2024/03/14 18:08:17 Ready to marshal response ...
	2024/03/14 18:08:17 Ready to write response ...
	2024/03/14 18:10:07 Ready to marshal response ...
	2024/03/14 18:10:07 Ready to write response ...
	
	
	==> kernel <==
	 18:10:18 up 5 min,  0 users,  load average: 0.99, 1.67, 0.85
	Linux addons-677681 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [085f15909479b24319d72c5103b206e0341aa88aab190e6ec8c1911c08dd1d37] <==
	I0314 18:07:45.023182       1 controller.go:624] quota admission added evaluator for: ingresses.networking.k8s.io
	I0314 18:07:45.260514       1 alloc.go:330] "allocated clusterIPs" service="default/nginx" clusterIPs={"IPv4":"10.108.197.248"}
	I0314 18:07:45.658033       1 alloc.go:330] "allocated clusterIPs" service="headlamp/headlamp" clusterIPs={"IPv4":"10.110.137.156"}
	E0314 18:07:48.687632       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"local-path-provisioner-service-account\" not found]"
	I0314 18:08:05.300539       1 controller.go:624] quota admission added evaluator for: volumesnapshots.snapshot.storage.k8s.io
	I0314 18:08:15.345252       1 controller.go:129] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Nothing (removed from the queue).
	I0314 18:08:33.102360       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.102783       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.115162       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.116316       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.143931       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.144312       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.186601       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.186739       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.194093       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.194238       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.241117       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.241190       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	I0314 18:08:33.261187       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1 to ResourceManager
	I0314 18:08:33.261253       1 handler.go:232] Adding GroupVersion snapshot.storage.k8s.io v1beta1 to ResourceManager
	W0314 18:08:34.187624       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotclasses.snapshot.storage.k8s.io
	W0314 18:08:34.261461       1 cacher.go:171] Terminating all watchers from cacher volumesnapshots.snapshot.storage.k8s.io
	W0314 18:08:34.266740       1 cacher.go:171] Terminating all watchers from cacher volumesnapshotcontents.snapshot.storage.k8s.io
	I0314 18:10:07.646856       1 alloc.go:330] "allocated clusterIPs" service="default/hello-world-app" clusterIPs={"IPv4":"10.104.156.115"}
	E0314 18:10:10.010785       1 authentication.go:73] "Unable to authenticate the request" err="[invalid bearer token, serviceaccounts \"ingress-nginx\" not found]"
	
	
	==> kube-controller-manager [8cbf041a827d5e777fa2bb34308264337086746a0d0826f0c7a3486ba6568d3a] <==
	E0314 18:09:13.264942       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:09:15.147829       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:09:15.148011       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:09:35.595794       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:09:35.595878       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:09:43.259923       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:09:43.260033       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:09:51.832008       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:09:51.832068       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	W0314 18:09:59.988240       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:09:59.988412       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	I0314 18:10:07.438336       1 event.go:307] "Event occurred" object="default/hello-world-app" fieldPath="" kind="Deployment" apiVersion="apps/v1" type="Normal" reason="ScalingReplicaSet" message="Scaled up replica set hello-world-app-5d77478584 to 1"
	I0314 18:10:07.473506       1 event.go:307] "Event occurred" object="default/hello-world-app-5d77478584" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: hello-world-app-5d77478584-c9vzt"
	I0314 18:10:07.494349       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="57.108721ms"
	I0314 18:10:07.508848       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="14.265027ms"
	I0314 18:10:07.509080       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="104.284µs"
	I0314 18:10:07.509296       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="100.322µs"
	I0314 18:10:07.521733       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="20.737µs"
	I0314 18:10:09.885321       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-create"
	I0314 18:10:09.893962       1 job_controller.go:562] "enqueueing job" key="ingress-nginx/ingress-nginx-admission-patch"
	I0314 18:10:09.895005       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="ingress-nginx/ingress-nginx-controller-76dc478dd8" duration="6.988µs"
	I0314 18:10:10.021472       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="9.231762ms"
	I0314 18:10:10.022261       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/hello-world-app-5d77478584" duration="54.401µs"
	W0314 18:10:15.474610       1 reflector.go:535] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	E0314 18:10:15.474884       1 reflector.go:147] vendor/k8s.io/client-go/metadata/metadatainformer/informer.go:106: Failed to watch *v1.PartialObjectMetadata: failed to list *v1.PartialObjectMetadata: the server could not find the requested resource
	
	
	==> kube-proxy [7d4d2e14daee94ecc03ba7b53b3fd41e7c3cbdb1f517d95bab65d60ebcc46942] <==
	I0314 18:05:53.730634       1 server_others.go:69] "Using iptables proxy"
	I0314 18:05:53.754520       1 node.go:141] Successfully retrieved node IP: 192.168.39.215
	I0314 18:05:53.941541       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:05:53.941560       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:05:53.949759       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:05:53.949810       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:05:53.949985       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:05:53.949993       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:05:53.951840       1 config.go:188] "Starting service config controller"
	I0314 18:05:53.951854       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:05:53.951874       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:05:53.951877       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:05:53.952145       1 config.go:315] "Starting node config controller"
	I0314 18:05:53.952151       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:05:54.051993       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:05:54.052029       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:05:54.052763       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [993d781fd1255b5ffa919691d2bdb5293f87f6082aba5ab9670c8664ec5541f9] <==
	W0314 18:05:33.950508       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:05:33.950535       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:05:33.950586       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 18:05:33.950613       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 18:05:33.950754       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 18:05:33.950790       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 18:05:33.951043       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:05:33.951068       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:05:34.760706       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 18:05:34.760823       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 18:05:34.782173       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:05:34.782262       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:05:34.789839       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:05:34.789913       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:05:35.050188       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:05:35.050330       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:05:35.098427       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 18:05:35.098542       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 18:05:35.154996       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 18:05:35.155046       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 18:05:35.186392       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 18:05:35.186478       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 18:05:35.419462       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:05:35.420063       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 18:05:37.343021       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.488611    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="8eaca7a1-3978-4b9c-bd10-3238f6235de7" containerName="csi-external-health-monitor-controller"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.488617    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="8eaca7a1-3978-4b9c-bd10-3238f6235de7" containerName="liveness-probe"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.488623    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="8eaca7a1-3978-4b9c-bd10-3238f6235de7" containerName="csi-provisioner"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.488629    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="74d8e461-bbe7-4442-9936-1913839aa73c" containerName="task-pv-container"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.488635    1249 memory_manager.go:346] "RemoveStaleState removing state" podUID="a018922c-4b39-4bf8-9fea-f3198c1e65ab" containerName="volume-snapshot-controller"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.619016    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"gcp-creds\" (UniqueName: \"kubernetes.io/host-path/be7db71b-156d-4594-a59d-837db6684f84-gcp-creds\") pod \"hello-world-app-5d77478584-c9vzt\" (UID: \"be7db71b-156d-4594-a59d-837db6684f84\") " pod="default/hello-world-app-5d77478584-c9vzt"
	Mar 14 18:10:07 addons-677681 kubelet[1249]: I0314 18:10:07.619117    1249 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pn6ts\" (UniqueName: \"kubernetes.io/projected/be7db71b-156d-4594-a59d-837db6684f84-kube-api-access-pn6ts\") pod \"hello-world-app-5d77478584-c9vzt\" (UID: \"be7db71b-156d-4594-a59d-837db6684f84\") " pod="default/hello-world-app-5d77478584-c9vzt"
	Mar 14 18:10:08 addons-677681 kubelet[1249]: I0314 18:10:08.930069    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5j6f7\" (UniqueName: \"kubernetes.io/projected/afa358a2-8cac-48fc-a475-8fd228291c6f-kube-api-access-5j6f7\") pod \"afa358a2-8cac-48fc-a475-8fd228291c6f\" (UID: \"afa358a2-8cac-48fc-a475-8fd228291c6f\") "
	Mar 14 18:10:08 addons-677681 kubelet[1249]: I0314 18:10:08.947009    1249 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/afa358a2-8cac-48fc-a475-8fd228291c6f-kube-api-access-5j6f7" (OuterVolumeSpecName: "kube-api-access-5j6f7") pod "afa358a2-8cac-48fc-a475-8fd228291c6f" (UID: "afa358a2-8cac-48fc-a475-8fd228291c6f"). InnerVolumeSpecName "kube-api-access-5j6f7". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 14 18:10:08 addons-677681 kubelet[1249]: I0314 18:10:08.973849    1249 scope.go:117] "RemoveContainer" containerID="86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60"
	Mar 14 18:10:09 addons-677681 kubelet[1249]: I0314 18:10:09.030404    1249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-5j6f7\" (UniqueName: \"kubernetes.io/projected/afa358a2-8cac-48fc-a475-8fd228291c6f-kube-api-access-5j6f7\") on node \"addons-677681\" DevicePath \"\""
	Mar 14 18:10:09 addons-677681 kubelet[1249]: I0314 18:10:09.123964    1249 scope.go:117] "RemoveContainer" containerID="86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60"
	Mar 14 18:10:09 addons-677681 kubelet[1249]: E0314 18:10:09.124865    1249 remote_runtime.go:432] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60\": container with ID starting with 86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60 not found: ID does not exist" containerID="86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60"
	Mar 14 18:10:09 addons-677681 kubelet[1249]: I0314 18:10:09.124943    1249 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60"} err="failed to get container status \"86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60\": rpc error: code = NotFound desc = could not find container \"86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60\": container with ID starting with 86b0481d7a0302dc459d9936943f9e75d9a16e56b1d95d81f10a479845cf1e60 not found: ID does not exist"
	Mar 14 18:10:09 addons-677681 kubelet[1249]: I0314 18:10:09.351814    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="afa358a2-8cac-48fc-a475-8fd228291c6f" path="/var/lib/kubelet/pods/afa358a2-8cac-48fc-a475-8fd228291c6f/volumes"
	Mar 14 18:10:11 addons-677681 kubelet[1249]: I0314 18:10:11.352529    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="06bd68c5-846d-49d7-a756-b59926e729d9" path="/var/lib/kubelet/pods/06bd68c5-846d-49d7-a756-b59926e729d9/volumes"
	Mar 14 18:10:11 addons-677681 kubelet[1249]: I0314 18:10:11.353078    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="b329d666-1dc2-4cdd-8739-e5edd448c7b6" path="/var/lib/kubelet/pods/b329d666-1dc2-4cdd-8739-e5edd448c7b6/volumes"
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.173870    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"kube-api-access-p4z68\" (UniqueName: \"kubernetes.io/projected/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-kube-api-access-p4z68\") pod \"dfb369fd-4dc8-4aa1-8a60-f6325693eddd\" (UID: \"dfb369fd-4dc8-4aa1-8a60-f6325693eddd\") "
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.173911    1249 reconciler_common.go:172] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-webhook-cert\") pod \"dfb369fd-4dc8-4aa1-8a60-f6325693eddd\" (UID: \"dfb369fd-4dc8-4aa1-8a60-f6325693eddd\") "
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.176758    1249 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "dfb369fd-4dc8-4aa1-8a60-f6325693eddd" (UID: "dfb369fd-4dc8-4aa1-8a60-f6325693eddd"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGidValue ""
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.183392    1249 operation_generator.go:882] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-kube-api-access-p4z68" (OuterVolumeSpecName: "kube-api-access-p4z68") pod "dfb369fd-4dc8-4aa1-8a60-f6325693eddd" (UID: "dfb369fd-4dc8-4aa1-8a60-f6325693eddd"). InnerVolumeSpecName "kube-api-access-p4z68". PluginName "kubernetes.io/projected", VolumeGidValue ""
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.274854    1249 reconciler_common.go:300] "Volume detached for volume \"kube-api-access-p4z68\" (UniqueName: \"kubernetes.io/projected/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-kube-api-access-p4z68\") on node \"addons-677681\" DevicePath \"\""
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.274885    1249 reconciler_common.go:300] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/dfb369fd-4dc8-4aa1-8a60-f6325693eddd-webhook-cert\") on node \"addons-677681\" DevicePath \"\""
	Mar 14 18:10:13 addons-677681 kubelet[1249]: I0314 18:10:13.351510    1249 kubelet_volumes.go:161] "Cleaned up orphaned pod volumes dir" podUID="dfb369fd-4dc8-4aa1-8a60-f6325693eddd" path="/var/lib/kubelet/pods/dfb369fd-4dc8-4aa1-8a60-f6325693eddd/volumes"
	Mar 14 18:10:14 addons-677681 kubelet[1249]: I0314 18:10:14.013300    1249 scope.go:117] "RemoveContainer" containerID="f2705f9775214438af21a6b58340a5da7cb67bc67a113ecb7bf9a20d84b2e79c"
	
	
	==> storage-provisioner [3c15ea11c647af8023beb566338ff60ab8fb476f2d1cd0e951741c244c4b0a76] <==
	I0314 18:06:03.628551       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 18:06:03.648309       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 18:06:03.648373       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 18:06:03.803380       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 18:06:03.803553       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_addons-677681_b0f734bc-4c96-451a-af81-09e160b85d86!
	I0314 18:06:03.955942       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"d35d26d1-7977-445a-8e4a-ce1c135dd687", APIVersion:"v1", ResourceVersion:"785", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' addons-677681_b0f734bc-4c96-451a-af81-09e160b85d86 became leader
	I0314 18:06:04.057883       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_addons-677681_b0f734bc-4c96-451a-af81-09e160b85d86!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p addons-677681 -n addons-677681
helpers_test.go:261: (dbg) Run:  kubectl --context addons-677681 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestAddons/parallel/Ingress FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestAddons/parallel/Ingress (154.38s)

                                                
                                    
x
+
TestAddons/StoppedEnableDisable (154.3s)

                                                
                                                
=== RUN   TestAddons/StoppedEnableDisable
addons_test.go:172: (dbg) Run:  out/minikube-linux-amd64 stop -p addons-677681
addons_test.go:172: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p addons-677681: exit status 82 (2m0.487962511s)

                                                
                                                
-- stdout --
	* Stopping node "addons-677681"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:174: failed to stop minikube. args "out/minikube-linux-amd64 stop -p addons-677681" : exit status 82
addons_test.go:176: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-677681
addons_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-677681: exit status 11 (21.523392513s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:178: failed to enable dashboard addon: args "out/minikube-linux-amd64 addons enable dashboard -p addons-677681" : exit status 11
addons_test.go:180: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-677681
addons_test.go:180: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-677681: exit status 11 (6.144492272s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_7b2045b3edf32de99b3c34afdc43bfaabe8aa3c2_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:182: failed to disable dashboard addon: args "out/minikube-linux-amd64 addons disable dashboard -p addons-677681" : exit status 11
addons_test.go:185: (dbg) Run:  out/minikube-linux-amd64 addons disable gvisor -p addons-677681
addons_test.go:185: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable gvisor -p addons-677681: exit status 11 (6.14388978s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_DISABLE_PAUSED: disable failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.215:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_8dd43b2cee45a94e37dbac1dd983966d1c97e7d4_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
addons_test.go:187: failed to disable non-enabled addon: args "out/minikube-linux-amd64 addons disable gvisor -p addons-677681" : exit status 11
--- FAIL: TestAddons/StoppedEnableDisable (154.30s)
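
The stop/enable/disable sequence above can be replayed outside the test harness with the same minikube commands the test drives; a minimal sketch, assuming the addons-677681 profile from this run still exists (the final logs command is the collection step suggested in the stderr above, not part of the test itself):

    # Stop the VM; the test expects a clean exit, but this run returned exit status 82 (GUEST_STOP_TIMEOUT).
    out/minikube-linux-amd64 stop -p addons-677681

    # With the VM still reported as "Running" but unreachable over SSH, addon toggles fail with exit status 11.
    out/minikube-linux-amd64 addons enable dashboard -p addons-677681
    out/minikube-linux-amd64 addons disable dashboard -p addons-677681
    out/minikube-linux-amd64 addons disable gvisor -p addons-677681

    # Collect logs for a GitHub issue, as the error output recommends.
    out/minikube-linux-amd64 logs --file=logs.txt -p addons-677681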

                                                
                                    
x
+
TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadDaemon
functional_test.go:354: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr
functional_test.go:354: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr: (8.749563371s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
functional_test.go:447: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image ls: (2.277731428s)
functional_test.go:442: expected "gcr.io/google-containers/addon-resizer:functional-059245" to be loaded into minikube but the image is not there
--- FAIL: TestFunctional/parallel/ImageCommands/ImageLoadDaemon (11.03s)
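
The image-load check can be replayed by hand with the same two commands the test wraps; a minimal sketch, assuming the functional-059245 profile is still running (the grep filter is an illustrative addition for spotting the tag, not part of the test):

    # Push the image from the local Docker daemon into the cluster's container runtime.
    out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr

    # List images inside minikube; the test fails because this tag is missing from the output.
    out/minikube-linux-amd64 -p functional-059245 image ls | grep addon-resizer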

                                                
                                    
x
+
TestMutliControlPlane/serial/StopSecondaryNode (150.48s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopSecondaryNode
ha_test.go:363: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 node stop m02 -v=7 --alsologtostderr
E0314 18:22:42.543623  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:22:55.492527  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:23:36.452761  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
ha_test.go:363: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 node stop m02 -v=7 --alsologtostderr: exit status 30 (2m0.507393642s)

                                                
                                                
-- stdout --
	* Stopping node "ha-105786-m02"  ...

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:22:41.259667  964440 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:22:41.259905  964440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:22:41.259914  964440 out.go:304] Setting ErrFile to fd 2...
	I0314 18:22:41.259918  964440 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:22:41.260076  964440 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:22:41.260369  964440 mustload.go:65] Loading cluster: ha-105786
	I0314 18:22:41.260809  964440 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:22:41.260832  964440 stop.go:39] StopHost: ha-105786-m02
	I0314 18:22:41.261215  964440 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:22:41.261270  964440 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:22:41.278539  964440 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46365
	I0314 18:22:41.279031  964440 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:22:41.279611  964440 main.go:141] libmachine: Using API Version  1
	I0314 18:22:41.279641  964440 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:22:41.280031  964440 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:22:41.282041  964440 out.go:177] * Stopping node "ha-105786-m02"  ...
	I0314 18:22:41.283337  964440 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 18:22:41.283386  964440 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:22:41.283642  964440 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 18:22:41.283676  964440 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:22:41.286871  964440 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:22:41.287373  964440 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:22:41.287402  964440 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:22:41.287561  964440 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:22:41.287774  964440 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:22:41.287962  964440 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:22:41.288105  964440 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:22:41.379187  964440 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 18:22:41.437607  964440 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 18:22:41.492923  964440 main.go:141] libmachine: Stopping "ha-105786-m02"...
	I0314 18:22:41.492957  964440 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:22:41.494463  964440 main.go:141] libmachine: (ha-105786-m02) Calling .Stop
	I0314 18:22:41.498188  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 0/120
	I0314 18:22:42.500284  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 1/120
	I0314 18:22:43.501769  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 2/120
	I0314 18:22:44.503016  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 3/120
	I0314 18:22:45.505136  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 4/120
	I0314 18:22:46.507189  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 5/120
	I0314 18:22:47.508500  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 6/120
	I0314 18:22:48.510649  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 7/120
	I0314 18:22:49.511988  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 8/120
	I0314 18:22:50.513268  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 9/120
	I0314 18:22:51.515769  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 10/120
	I0314 18:22:52.517857  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 11/120
	I0314 18:22:53.519148  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 12/120
	I0314 18:22:54.521043  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 13/120
	I0314 18:22:55.522869  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 14/120
	I0314 18:22:56.524654  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 15/120
	I0314 18:22:57.526916  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 16/120
	I0314 18:22:58.528378  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 17/120
	I0314 18:22:59.530688  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 18/120
	I0314 18:23:00.531996  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 19/120
	I0314 18:23:01.534226  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 20/120
	I0314 18:23:02.535540  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 21/120
	I0314 18:23:03.537593  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 22/120
	I0314 18:23:04.539260  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 23/120
	I0314 18:23:05.540759  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 24/120
	I0314 18:23:06.542798  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 25/120
	I0314 18:23:07.544274  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 26/120
	I0314 18:23:08.546116  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 27/120
	I0314 18:23:09.547571  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 28/120
	I0314 18:23:10.548918  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 29/120
	I0314 18:23:11.551083  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 30/120
	I0314 18:23:12.553378  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 31/120
	I0314 18:23:13.554938  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 32/120
	I0314 18:23:14.556707  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 33/120
	I0314 18:23:15.558119  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 34/120
	I0314 18:23:16.559756  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 35/120
	I0314 18:23:17.561172  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 36/120
	I0314 18:23:18.562863  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 37/120
	I0314 18:23:19.564417  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 38/120
	I0314 18:23:20.566809  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 39/120
	I0314 18:23:21.568619  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 40/120
	I0314 18:23:22.570667  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 41/120
	I0314 18:23:23.572002  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 42/120
	I0314 18:23:24.573550  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 43/120
	I0314 18:23:25.575120  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 44/120
	I0314 18:23:26.577075  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 45/120
	I0314 18:23:27.579237  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 46/120
	I0314 18:23:28.580644  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 47/120
	I0314 18:23:29.582822  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 48/120
	I0314 18:23:30.584369  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 49/120
	I0314 18:23:31.585568  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 50/120
	I0314 18:23:32.587054  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 51/120
	I0314 18:23:33.588481  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 52/120
	I0314 18:23:34.589992  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 53/120
	I0314 18:23:35.591668  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 54/120
	I0314 18:23:36.593463  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 55/120
	I0314 18:23:37.594806  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 56/120
	I0314 18:23:38.596244  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 57/120
	I0314 18:23:39.598591  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 58/120
	I0314 18:23:40.599928  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 59/120
	I0314 18:23:41.601843  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 60/120
	I0314 18:23:42.603471  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 61/120
	I0314 18:23:43.605566  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 62/120
	I0314 18:23:44.607222  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 63/120
	I0314 18:23:45.609217  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 64/120
	I0314 18:23:46.611267  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 65/120
	I0314 18:23:47.612589  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 66/120
	I0314 18:23:48.614119  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 67/120
	I0314 18:23:49.616093  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 68/120
	I0314 18:23:50.618066  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 69/120
	I0314 18:23:51.620295  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 70/120
	I0314 18:23:52.621646  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 71/120
	I0314 18:23:53.622997  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 72/120
	I0314 18:23:54.624381  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 73/120
	I0314 18:23:55.626783  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 74/120
	I0314 18:23:56.628649  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 75/120
	I0314 18:23:57.630017  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 76/120
	I0314 18:23:58.631396  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 77/120
	I0314 18:23:59.633669  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 78/120
	I0314 18:24:00.636182  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 79/120
	I0314 18:24:01.638406  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 80/120
	I0314 18:24:02.639701  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 81/120
	I0314 18:24:03.641217  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 82/120
	I0314 18:24:04.642898  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 83/120
	I0314 18:24:05.644289  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 84/120
	I0314 18:24:06.646121  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 85/120
	I0314 18:24:07.647375  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 86/120
	I0314 18:24:08.649104  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 87/120
	I0314 18:24:09.651425  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 88/120
	I0314 18:24:10.653019  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 89/120
	I0314 18:24:11.655085  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 90/120
	I0314 18:24:12.656543  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 91/120
	I0314 18:24:13.657917  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 92/120
	I0314 18:24:14.659398  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 93/120
	I0314 18:24:15.660860  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 94/120
	I0314 18:24:16.662737  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 95/120
	I0314 18:24:17.664926  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 96/120
	I0314 18:24:18.666786  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 97/120
	I0314 18:24:19.668302  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 98/120
	I0314 18:24:20.669939  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 99/120
	I0314 18:24:21.672244  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 100/120
	I0314 18:24:22.673758  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 101/120
	I0314 18:24:23.675130  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 102/120
	I0314 18:24:24.676658  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 103/120
	I0314 18:24:25.678938  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 104/120
	I0314 18:24:26.680853  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 105/120
	I0314 18:24:27.682841  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 106/120
	I0314 18:24:28.684432  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 107/120
	I0314 18:24:29.685648  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 108/120
	I0314 18:24:30.687223  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 109/120
	I0314 18:24:31.689731  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 110/120
	I0314 18:24:32.691311  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 111/120
	I0314 18:24:33.692823  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 112/120
	I0314 18:24:34.694891  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 113/120
	I0314 18:24:35.696283  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 114/120
	I0314 18:24:36.698154  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 115/120
	I0314 18:24:37.699487  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 116/120
	I0314 18:24:38.701894  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 117/120
	I0314 18:24:39.703399  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 118/120
	I0314 18:24:40.704874  964440 main.go:141] libmachine: (ha-105786-m02) Waiting for machine to stop 119/120
	I0314 18:24:41.705651  964440 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 18:24:41.705878  964440 out.go:239] X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"
	X Failed to stop node m02: Temporary Error: stop: unable to stop vm, current state "Running"

                                                
                                                
** /stderr **
ha_test.go:365: secondary control-plane node stop returned an error. args "out/minikube-linux-amd64 -p ha-105786 node stop m02 -v=7 --alsologtostderr": exit status 30
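The stop sequence in the stderr above polls the VM once per second for 120 attempts and then gives up while libvirt still reports the domain as "Running". Below is a minimal Go sketch of that kind of stop-wait loop; the names getState and waitForStop are hypothetical stand-ins for illustration, not minikube's actual libmachine API, and only the 1-second interval and the 120-attempt budget are taken from the log above.

	// stopwait_sketch.go: illustrative only; getState and waitForStop are hypothetical names.
	package main

	import (
		"fmt"
		"time"
	)

	// getState stands in for a probe of the VM's current power state.
	func getState() string { return "Running" }

	// waitForStop polls once per second, up to maxRetries times, and fails
	// if the machine never leaves the "Running" state.
	func waitForStop(maxRetries int) error {
		for i := 0; i < maxRetries; i++ {
			if getState() != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxRetries)
			time.Sleep(1 * time.Second)
		}
		return fmt.Errorf("unable to stop vm, current state %q", getState())
	}

	func main() {
		if err := waitForStop(120); err != nil {
			fmt.Println("stop err:", err)
		}
	}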
ha_test.go:369: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
E0314 18:24:58.373787  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
ha_test.go:369: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (27.46026311s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:24:41.768567  964758 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:24:41.768839  964758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:24:41.768870  964758 out.go:304] Setting ErrFile to fd 2...
	I0314 18:24:41.768886  964758 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:24:41.769475  964758 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:24:41.769734  964758 out.go:298] Setting JSON to false
	I0314 18:24:41.769802  964758 mustload.go:65] Loading cluster: ha-105786
	I0314 18:24:41.769899  964758 notify.go:220] Checking for updates...
	I0314 18:24:41.770297  964758 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:24:41.770320  964758 status.go:255] checking status of ha-105786 ...
	I0314 18:24:41.770714  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:41.770786  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:41.790569  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35247
	I0314 18:24:41.791133  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:41.791794  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:41.791815  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:41.792240  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:41.792483  964758 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:24:41.794297  964758 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:24:41.794326  964758 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:24:41.794655  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:41.794698  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:41.809613  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33737
	I0314 18:24:41.809998  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:41.810509  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:41.810530  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:41.810855  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:41.811042  964758 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:24:41.813662  964758 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:24:41.814095  964758 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:24:41.814137  964758 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:24:41.814202  964758 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:24:41.814527  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:41.814563  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:41.830133  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44053
	I0314 18:24:41.830516  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:41.830935  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:41.830963  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:41.831281  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:41.831464  964758 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:24:41.831697  964758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:24:41.831719  964758 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:24:41.834303  964758 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:24:41.834671  964758 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:24:41.834705  964758 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:24:41.834823  964758 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:24:41.834994  964758 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:24:41.835154  964758 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:24:41.835300  964758 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:24:41.928140  964758 ssh_runner.go:195] Run: systemctl --version
	I0314 18:24:41.936626  964758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:24:41.957547  964758 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:24:41.957578  964758 api_server.go:166] Checking apiserver status ...
	I0314 18:24:41.957634  964758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:24:41.973438  964758 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:24:41.983766  964758 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:24:41.983820  964758 ssh_runner.go:195] Run: ls
	I0314 18:24:41.989015  964758 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:24:46.989581  964758 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:24:46.989781  964758 retry.go:31] will retry after 265.361066ms: state is "Stopped"
	I0314 18:24:47.256261  964758 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:24:50.301275  964758 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:24:50.301322  964758 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:24:50.301332  964758 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:24:50.301354  964758 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:24:50.301736  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:50.301776  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:50.317130  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43289
	I0314 18:24:50.317565  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:50.318035  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:50.318058  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:50.318534  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:50.318780  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:24:50.320423  964758 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:24:50.320442  964758 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:24:50.320763  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:50.320809  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:50.338412  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43179
	I0314 18:24:50.338805  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:50.339330  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:50.339354  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:50.339673  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:50.339877  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:24:50.342742  964758 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:24:50.343292  964758 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:24:50.343321  964758 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:24:50.343491  964758 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:24:50.343903  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:24:50.343959  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:24:50.359819  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33071
	I0314 18:24:50.360219  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:24:50.360699  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:24:50.360721  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:24:50.361065  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:24:50.361292  964758 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:24:50.361504  964758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:24:50.361531  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:24:50.364641  964758 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:24:50.365090  964758 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:24:50.365118  964758 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:24:50.365319  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:24:50.365506  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:24:50.365710  964758 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:24:50.365885  964758 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:08.792416  964758 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:08.792505  964758 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:08.792523  964758 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:08.792532  964758 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:08.792552  964758 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:08.792563  964758 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:08.792894  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:08.792956  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:08.808250  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35461
	I0314 18:25:08.808717  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:08.809361  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:08.809387  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:08.809728  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:08.809927  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:08.811665  964758 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:08.811683  964758 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:08.811983  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:08.812016  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:08.827834  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40315
	I0314 18:25:08.828302  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:08.828795  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:08.828815  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:08.829124  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:08.829374  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:08.832171  964758 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:08.832609  964758 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:08.832635  964758 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:08.832797  964758 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:08.833082  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:08.833123  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:08.847571  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35065
	I0314 18:25:08.847970  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:08.848487  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:08.848522  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:08.848893  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:08.849086  964758 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:08.849276  964758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:08.849302  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:08.851836  964758 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:08.852312  964758 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:08.852346  964758 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:08.852494  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:08.852656  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:08.852799  964758 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:08.852907  964758 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:08.934389  964758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:08.953050  964758 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:08.953083  964758 api_server.go:166] Checking apiserver status ...
	I0314 18:25:08.953136  964758 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:08.969337  964758 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:08.980765  964758 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:08.980819  964758 ssh_runner.go:195] Run: ls
	I0314 18:25:08.988164  964758 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:08.996984  964758 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:08.997007  964758 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:08.997015  964758 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:08.997031  964758 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:08.997314  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:08.997349  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:09.012568  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42615
	I0314 18:25:09.013021  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:09.013532  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:09.013555  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:09.013858  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:09.014065  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:09.015635  964758 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:09.015657  964758 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:09.015918  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:09.015952  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:09.030950  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34875
	I0314 18:25:09.031390  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:09.031817  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:09.031839  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:09.032203  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:09.032416  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:09.035139  964758 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:09.035554  964758 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:09.035580  964758 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:09.035717  964758 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:09.035988  964758 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:09.036023  964758 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:09.049899  964758 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45809
	I0314 18:25:09.050310  964758 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:09.050747  964758 main.go:141] libmachine: Using API Version  1
	I0314 18:25:09.050776  964758 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:09.051088  964758 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:09.051272  964758 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:09.051450  964758 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:09.051470  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:09.053953  964758 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:09.054371  964758 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:09.054428  964758 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:09.054550  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:09.054732  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:09.054890  964758 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:09.055091  964758 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:09.146204  964758 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:09.164924  964758 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:372: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr" : exit status 3
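The status probe in the stderr above checks each control-plane node's host state, kubelet (via systemctl is-active), and the apiserver, first by process/cgroup lookup and then by GETting https://192.168.39.254:8443/healthz; against m02 it fails earlier, when the SSH dial returns "no route to host". A minimal sketch of the healthz-style probe follows, assuming an insecure TLS client and the roughly 5-second deadline visible in the log; it is illustrative only, not minikube's actual status.go.

	// healthz_sketch.go: illustrative apiserver health probe; not minikube's actual code.
	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// checkHealthz performs a single GET against the apiserver health endpoint
	// and reports any transport error or non-200 response.
	func checkHealthz(url string) error {
		client := &http.Client{
			Timeout: 5 * time.Second, // roughly the deadline seen in the log
			Transport: &http.Transport{
				// assumption: skip certificate verification for this sketch
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(url)
		if err != nil {
			return fmt.Errorf("stopped: %s: %w", url, err)
		}
		defer resp.Body.Close()
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("%s returned %d", url, resp.StatusCode)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.254:8443/healthz"); err != nil {
			fmt.Println("apiserver status error:", err)
			return
		}
		fmt.Println("ok")
	}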
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105786 -n ha-105786
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 logs -n 25: (1.587975642s)
helpers_test.go:252: TestMutliControlPlane/serial/StopSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m03.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m03_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m04 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp testdata/cp-test.txt                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m03 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105786 node stop m02 -v=7                                                     | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:18:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:18:02.895267  960722 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:18:02.895394  960722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:18:02.895404  960722 out.go:304] Setting ErrFile to fd 2...
	I0314 18:18:02.895408  960722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:18:02.895618  960722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:18:02.896256  960722 out.go:298] Setting JSON to false
	I0314 18:18:02.897280  960722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":93635,"bootTime":1710346648,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:18:02.897349  960722 start.go:139] virtualization: kvm guest
	I0314 18:18:02.899596  960722 out.go:177] * [ha-105786] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:18:02.900900  960722 notify.go:220] Checking for updates...
	I0314 18:18:02.902213  960722 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:18:02.903507  960722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:18:02.904780  960722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:18:02.905989  960722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:02.907362  960722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:18:02.908640  960722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:18:02.909996  960722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:18:02.946213  960722 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 18:18:02.947582  960722 start.go:297] selected driver: kvm2
	I0314 18:18:02.947601  960722 start.go:901] validating driver "kvm2" against <nil>
	I0314 18:18:02.947612  960722 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:18:02.948348  960722 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:18:02.948426  960722 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:18:02.962979  960722 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:18:02.963024  960722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:18:02.963232  960722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:18:02.963264  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:02.963271  960722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 18:18:02.963282  960722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 18:18:02.963335  960722 start.go:340] cluster config:
	{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:18:02.963430  960722 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:18:02.965090  960722 out.go:177] * Starting "ha-105786" primary control-plane node in "ha-105786" cluster
	I0314 18:18:02.966371  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:18:02.966398  960722 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:18:02.966405  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:18:02.966471  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:18:02.966481  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:18:02.966775  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:18:02.966800  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json: {Name:mk72f73c0aa560b79de9e232e75bc80724a95ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:02.966930  960722 start.go:360] acquireMachinesLock for ha-105786: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:18:02.966958  960722 start.go:364] duration metric: took 15.33µs to acquireMachinesLock for "ha-105786"
	I0314 18:18:02.966974  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:18:02.967026  960722 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 18:18:02.968645  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:18:02.968768  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:18:02.968806  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:18:02.982864  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0314 18:18:02.983324  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:18:02.983936  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:18:02.983955  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:18:02.984350  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:18:02.984629  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:02.984761  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:02.984910  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:18:02.984933  960722 client.go:168] LocalClient.Create starting
	I0314 18:18:02.984958  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:18:02.984991  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:18:02.985006  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:18:02.985059  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:18:02.985077  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:18:02.985087  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:18:02.985105  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:18:02.985114  960722 main.go:141] libmachine: (ha-105786) Calling .PreCreateCheck
	I0314 18:18:02.985483  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:02.985793  960722 main.go:141] libmachine: Creating machine...
	I0314 18:18:02.985807  960722 main.go:141] libmachine: (ha-105786) Calling .Create
	I0314 18:18:02.985947  960722 main.go:141] libmachine: (ha-105786) Creating KVM machine...
	I0314 18:18:02.987182  960722 main.go:141] libmachine: (ha-105786) DBG | found existing default KVM network
	I0314 18:18:02.987946  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:02.987802  960745 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0314 18:18:02.987990  960722 main.go:141] libmachine: (ha-105786) DBG | created network xml: 
	I0314 18:18:02.988005  960722 main.go:141] libmachine: (ha-105786) DBG | <network>
	I0314 18:18:02.988018  960722 main.go:141] libmachine: (ha-105786) DBG |   <name>mk-ha-105786</name>
	I0314 18:18:02.988026  960722 main.go:141] libmachine: (ha-105786) DBG |   <dns enable='no'/>
	I0314 18:18:02.988035  960722 main.go:141] libmachine: (ha-105786) DBG |   
	I0314 18:18:02.988043  960722 main.go:141] libmachine: (ha-105786) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0314 18:18:02.988055  960722 main.go:141] libmachine: (ha-105786) DBG |     <dhcp>
	I0314 18:18:02.988062  960722 main.go:141] libmachine: (ha-105786) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0314 18:18:02.988067  960722 main.go:141] libmachine: (ha-105786) DBG |     </dhcp>
	I0314 18:18:02.988075  960722 main.go:141] libmachine: (ha-105786) DBG |   </ip>
	I0314 18:18:02.988079  960722 main.go:141] libmachine: (ha-105786) DBG |   
	I0314 18:18:02.988083  960722 main.go:141] libmachine: (ha-105786) DBG | </network>
	I0314 18:18:02.988091  960722 main.go:141] libmachine: (ha-105786) DBG | 
	I0314 18:18:02.993007  960722 main.go:141] libmachine: (ha-105786) DBG | trying to create private KVM network mk-ha-105786 192.168.39.0/24...
	I0314 18:18:03.062029  960722 main.go:141] libmachine: (ha-105786) DBG | private KVM network mk-ha-105786 192.168.39.0/24 created
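
The step above hands libvirt the generated network XML and brings the private network up so it can serve DHCP on 192.168.39.0/24. A minimal sketch of that idea in Go, assuming the libvirt.org/go/libvirt bindings (this is not minikube's actual driver code; the XML is the one printed in the log):

    package main

    import (
        "log"

        libvirt "libvirt.org/go/libvirt" // assumed binding; minikube's real dependency may differ
    )

    const networkXML = `<network>
      <name>mk-ha-105786</name>
      <dns enable='no'/>
      <ip address='192.168.39.1' netmask='255.255.255.0'>
        <dhcp>
          <range start='192.168.39.2' end='192.168.39.253'/>
        </dhcp>
      </ip>
    </network>`

    func main() {
        conn, err := libvirt.NewConnect("qemu:///system") // same URI as KVMQemuURI in the config dump
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        // Define the network from XML, then start it so DHCP is served on 192.168.39.0/24.
        nw, err := conn.NetworkDefineXML(networkXML)
        if err != nil {
            log.Fatal(err)
        }
        defer nw.Free()
        if err := nw.Create(); err != nil {
            log.Fatal(err)
        }
    }
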
	I0314 18:18:03.062064  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.062015  960745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:03.062078  960722 main.go:141] libmachine: (ha-105786) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 ...
	I0314 18:18:03.062114  960722 main.go:141] libmachine: (ha-105786) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:18:03.062221  960722 main.go:141] libmachine: (ha-105786) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:18:03.320056  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.319919  960745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa...
	I0314 18:18:03.443945  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.443772  960745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/ha-105786.rawdisk...
	I0314 18:18:03.443980  960722 main.go:141] libmachine: (ha-105786) DBG | Writing magic tar header
	I0314 18:18:03.443990  960722 main.go:141] libmachine: (ha-105786) DBG | Writing SSH key tar header
	I0314 18:18:03.443998  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.443904  960745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 ...
	I0314 18:18:03.444011  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786
	I0314 18:18:03.444072  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 (perms=drwx------)
	I0314 18:18:03.444103  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:18:03.444115  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:18:03.444122  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:03.444138  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:18:03.444151  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:18:03.444166  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:18:03.444178  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home
	I0314 18:18:03.444188  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:18:03.444194  960722 main.go:141] libmachine: (ha-105786) DBG | Skipping /home - not owner
	I0314 18:18:03.444203  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:18:03.444231  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:18:03.444247  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:18:03.444261  960722 main.go:141] libmachine: (ha-105786) Creating domain...
	I0314 18:18:03.445522  960722 main.go:141] libmachine: (ha-105786) define libvirt domain using xml: 
	I0314 18:18:03.445544  960722 main.go:141] libmachine: (ha-105786) <domain type='kvm'>
	I0314 18:18:03.445550  960722 main.go:141] libmachine: (ha-105786)   <name>ha-105786</name>
	I0314 18:18:03.445555  960722 main.go:141] libmachine: (ha-105786)   <memory unit='MiB'>2200</memory>
	I0314 18:18:03.445563  960722 main.go:141] libmachine: (ha-105786)   <vcpu>2</vcpu>
	I0314 18:18:03.445569  960722 main.go:141] libmachine: (ha-105786)   <features>
	I0314 18:18:03.445580  960722 main.go:141] libmachine: (ha-105786)     <acpi/>
	I0314 18:18:03.445591  960722 main.go:141] libmachine: (ha-105786)     <apic/>
	I0314 18:18:03.445618  960722 main.go:141] libmachine: (ha-105786)     <pae/>
	I0314 18:18:03.445666  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.445680  960722 main.go:141] libmachine: (ha-105786)   </features>
	I0314 18:18:03.445689  960722 main.go:141] libmachine: (ha-105786)   <cpu mode='host-passthrough'>
	I0314 18:18:03.445701  960722 main.go:141] libmachine: (ha-105786)   
	I0314 18:18:03.445718  960722 main.go:141] libmachine: (ha-105786)   </cpu>
	I0314 18:18:03.445746  960722 main.go:141] libmachine: (ha-105786)   <os>
	I0314 18:18:03.445770  960722 main.go:141] libmachine: (ha-105786)     <type>hvm</type>
	I0314 18:18:03.445785  960722 main.go:141] libmachine: (ha-105786)     <boot dev='cdrom'/>
	I0314 18:18:03.445793  960722 main.go:141] libmachine: (ha-105786)     <boot dev='hd'/>
	I0314 18:18:03.445807  960722 main.go:141] libmachine: (ha-105786)     <bootmenu enable='no'/>
	I0314 18:18:03.445818  960722 main.go:141] libmachine: (ha-105786)   </os>
	I0314 18:18:03.445830  960722 main.go:141] libmachine: (ha-105786)   <devices>
	I0314 18:18:03.445841  960722 main.go:141] libmachine: (ha-105786)     <disk type='file' device='cdrom'>
	I0314 18:18:03.445858  960722 main.go:141] libmachine: (ha-105786)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/boot2docker.iso'/>
	I0314 18:18:03.445877  960722 main.go:141] libmachine: (ha-105786)       <target dev='hdc' bus='scsi'/>
	I0314 18:18:03.445906  960722 main.go:141] libmachine: (ha-105786)       <readonly/>
	I0314 18:18:03.445923  960722 main.go:141] libmachine: (ha-105786)     </disk>
	I0314 18:18:03.445937  960722 main.go:141] libmachine: (ha-105786)     <disk type='file' device='disk'>
	I0314 18:18:03.445959  960722 main.go:141] libmachine: (ha-105786)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:18:03.445972  960722 main.go:141] libmachine: (ha-105786)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/ha-105786.rawdisk'/>
	I0314 18:18:03.445980  960722 main.go:141] libmachine: (ha-105786)       <target dev='hda' bus='virtio'/>
	I0314 18:18:03.445986  960722 main.go:141] libmachine: (ha-105786)     </disk>
	I0314 18:18:03.445992  960722 main.go:141] libmachine: (ha-105786)     <interface type='network'>
	I0314 18:18:03.445998  960722 main.go:141] libmachine: (ha-105786)       <source network='mk-ha-105786'/>
	I0314 18:18:03.446005  960722 main.go:141] libmachine: (ha-105786)       <model type='virtio'/>
	I0314 18:18:03.446010  960722 main.go:141] libmachine: (ha-105786)     </interface>
	I0314 18:18:03.446019  960722 main.go:141] libmachine: (ha-105786)     <interface type='network'>
	I0314 18:18:03.446027  960722 main.go:141] libmachine: (ha-105786)       <source network='default'/>
	I0314 18:18:03.446032  960722 main.go:141] libmachine: (ha-105786)       <model type='virtio'/>
	I0314 18:18:03.446040  960722 main.go:141] libmachine: (ha-105786)     </interface>
	I0314 18:18:03.446047  960722 main.go:141] libmachine: (ha-105786)     <serial type='pty'>
	I0314 18:18:03.446052  960722 main.go:141] libmachine: (ha-105786)       <target port='0'/>
	I0314 18:18:03.446059  960722 main.go:141] libmachine: (ha-105786)     </serial>
	I0314 18:18:03.446064  960722 main.go:141] libmachine: (ha-105786)     <console type='pty'>
	I0314 18:18:03.446072  960722 main.go:141] libmachine: (ha-105786)       <target type='serial' port='0'/>
	I0314 18:18:03.446080  960722 main.go:141] libmachine: (ha-105786)     </console>
	I0314 18:18:03.446096  960722 main.go:141] libmachine: (ha-105786)     <rng model='virtio'>
	I0314 18:18:03.446105  960722 main.go:141] libmachine: (ha-105786)       <backend model='random'>/dev/random</backend>
	I0314 18:18:03.446112  960722 main.go:141] libmachine: (ha-105786)     </rng>
	I0314 18:18:03.446117  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.446123  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.446128  960722 main.go:141] libmachine: (ha-105786)   </devices>
	I0314 18:18:03.446134  960722 main.go:141] libmachine: (ha-105786) </domain>
	I0314 18:18:03.446149  960722 main.go:141] libmachine: (ha-105786) 
	I0314 18:18:03.450443  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:71:4d:d4 in network default
	I0314 18:18:03.451073  960722 main.go:141] libmachine: (ha-105786) Ensuring networks are active...
	I0314 18:18:03.451111  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:03.451694  960722 main.go:141] libmachine: (ha-105786) Ensuring network default is active
	I0314 18:18:03.452099  960722 main.go:141] libmachine: (ha-105786) Ensuring network mk-ha-105786 is active
	I0314 18:18:03.452690  960722 main.go:141] libmachine: (ha-105786) Getting domain xml...
	I0314 18:18:03.453518  960722 main.go:141] libmachine: (ha-105786) Creating domain...
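
Defining and booting the VM follows the same pattern: the <domain type='kvm'> XML printed above is handed to libvirt and the resulting domain is started. A short sketch under the same assumptions as the previous one (reading the XML from a file is only to keep the example compact):

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed binding, as in the previous sketch
    )

    func main() {
        // domainXML stands in for the <domain type='kvm'> document printed above.
        domainXML, err := os.ReadFile("ha-105786.xml")
        if err != nil {
            log.Fatal(err)
        }
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatal(err)
        }
        defer conn.Close()

        dom, err := conn.DomainDefineXML(string(domainXML))
        if err != nil {
            log.Fatal(err)
        }
        defer dom.Free()
        if err := dom.Create(); err != nil { // boots the domain; the driver then polls for a DHCP lease
            log.Fatal(err)
        }
    }
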
	I0314 18:18:04.631641  960722 main.go:141] libmachine: (ha-105786) Waiting to get IP...
	I0314 18:18:04.632510  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:04.632971  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:04.633014  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:04.632960  960745 retry.go:31] will retry after 277.79723ms: waiting for machine to come up
	I0314 18:18:04.912574  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:04.913043  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:04.913072  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:04.912993  960745 retry.go:31] will retry after 255.423441ms: waiting for machine to come up
	I0314 18:18:05.170565  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:05.171048  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:05.171074  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:05.171001  960745 retry.go:31] will retry after 442.032708ms: waiting for machine to come up
	I0314 18:18:05.614258  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:05.614756  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:05.614790  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:05.614644  960745 retry.go:31] will retry after 569.414403ms: waiting for machine to come up
	I0314 18:18:06.185359  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:06.185838  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:06.185871  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:06.185829  960745 retry.go:31] will retry after 718.712718ms: waiting for machine to come up
	I0314 18:18:06.906730  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:06.907546  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:06.907569  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:06.907509  960745 retry.go:31] will retry after 573.35881ms: waiting for machine to come up
	I0314 18:18:07.481989  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:07.482332  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:07.482649  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:07.482303  960745 retry.go:31] will retry after 978.743717ms: waiting for machine to come up
	I0314 18:18:08.462336  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:08.462863  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:08.462896  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:08.462796  960745 retry.go:31] will retry after 1.071065961s: waiting for machine to come up
	I0314 18:18:09.535145  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:09.535547  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:09.535575  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:09.535484  960745 retry.go:31] will retry after 1.510895728s: waiting for machine to come up
	I0314 18:18:11.048495  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:11.048970  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:11.049003  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:11.048904  960745 retry.go:31] will retry after 1.947807983s: waiting for machine to come up
	I0314 18:18:12.998012  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:12.998404  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:12.998435  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:12.998372  960745 retry.go:31] will retry after 2.168107958s: waiting for machine to come up
	I0314 18:18:15.169746  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:15.170086  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:15.170118  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:15.170046  960745 retry.go:31] will retry after 2.38476079s: waiting for machine to come up
	I0314 18:18:17.557544  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:17.557911  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:17.557936  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:17.557857  960745 retry.go:31] will retry after 3.672710927s: waiting for machine to come up
	I0314 18:18:21.234171  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:21.234560  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:21.234588  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:21.234513  960745 retry.go:31] will retry after 4.998566272s: waiting for machine to come up
	I0314 18:18:26.237299  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.237759  960722 main.go:141] libmachine: (ha-105786) Found IP for machine: 192.168.39.170
	I0314 18:18:26.237781  960722 main.go:141] libmachine: (ha-105786) Reserving static IP address...
	I0314 18:18:26.237803  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has current primary IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.238321  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find host DHCP lease matching {name: "ha-105786", mac: "52:54:00:87:0a:bd", ip: "192.168.39.170"} in network mk-ha-105786
	I0314 18:18:26.314639  960722 main.go:141] libmachine: (ha-105786) DBG | Getting to WaitForSSH function...
	I0314 18:18:26.314675  960722 main.go:141] libmachine: (ha-105786) Reserved static IP address: 192.168.39.170
	I0314 18:18:26.314688  960722 main.go:141] libmachine: (ha-105786) Waiting for SSH to be available...
	I0314 18:18:26.317402  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.317776  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.317800  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.317964  960722 main.go:141] libmachine: (ha-105786) DBG | Using SSH client type: external
	I0314 18:18:26.317986  960722 main.go:141] libmachine: (ha-105786) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa (-rw-------)
	I0314 18:18:26.318017  960722 main.go:141] libmachine: (ha-105786) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:18:26.318032  960722 main.go:141] libmachine: (ha-105786) DBG | About to run SSH command:
	I0314 18:18:26.318045  960722 main.go:141] libmachine: (ha-105786) DBG | exit 0
	I0314 18:18:26.444372  960722 main.go:141] libmachine: (ha-105786) DBG | SSH cmd err, output: <nil>: 
	I0314 18:18:26.444671  960722 main.go:141] libmachine: (ha-105786) KVM machine creation complete!
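
The "Waiting to get IP" phase above is a poll-with-backoff loop: look up the domain's MAC address in the network's DHCP leases and sleep an increasing, jittered interval between attempts. A simplified, standard-library-only sketch of that pattern; lookupLeaseIP is a hypothetical stand-in for the real libvirt lease query:

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupLeaseIP is a hypothetical placeholder for querying the libvirt
    // network's DHCP leases for the domain's MAC (52:54:00:87:0a:bd in the log).
    func lookupLeaseIP(mac string) (string, error) {
        return "", errors.New("no lease yet")
    }

    func waitForIP(mac string, timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupLeaseIP(mac); err == nil {
                return ip, nil
            }
            // Grow the delay and add jitter, roughly matching the 277ms, 255ms,
            // 442ms, ... sequence of "will retry after" messages in the log.
            sleep := delay + time.Duration(rand.Int63n(int64(delay)))
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            delay = delay * 3 / 2
        }
        return "", fmt.Errorf("machine %s did not get an IP within %v", mac, timeout)
    }

    func main() {
        if ip, err := waitForIP("52:54:00:87:0a:bd", 2*time.Minute); err != nil {
            fmt.Println(err)
        } else {
            fmt.Println("found IP:", ip)
        }
    }
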
	I0314 18:18:26.445014  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:26.445659  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:26.445873  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:26.446084  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:18:26.446107  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:18:26.447559  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:18:26.447574  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:18:26.447581  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:18:26.447590  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.450086  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.450653  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.450681  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.450848  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.451046  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.451218  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.451347  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.451543  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.451787  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.451800  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:18:26.551832  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:18:26.551858  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:18:26.551867  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.554716  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.555203  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.555235  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.555328  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.555544  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.555754  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.555907  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.556127  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.556343  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.556356  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:18:26.657296  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:18:26.657397  960722 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:18:26.657408  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:18:26.657415  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.657659  960722 buildroot.go:166] provisioning hostname "ha-105786"
	I0314 18:18:26.657682  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.657839  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.660413  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.660745  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.660769  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.660865  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.661051  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.661216  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.661422  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.661574  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.661786  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.661802  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname
	I0314 18:18:26.776346  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:18:26.776384  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.778907  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.779350  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.779381  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.779528  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.779722  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.779925  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.780055  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.780228  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.780407  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.780423  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:18:26.889916  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
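
The provisioning commands above (exit 0, cat /etc/os-release, the hostname block) run over SSH with the machine's generated key. A self-contained sketch of that flow using golang.org/x/crypto/ssh; the key path, user, address and command are taken from the log, but this is not libmachine's actual client code, and host-key checking is skipped only to mirror the StrictHostKeyChecking=no behaviour seen above:

    package main

    import (
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa")
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }
        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // mirrors StrictHostKeyChecking=no in the log
        }
        client, err := ssh.Dial("tcp", "192.168.39.170:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        // Same provisioning command as the hostname step above.
        out, err := session.CombinedOutput(`sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname`)
        if err != nil {
            log.Fatal(err)
        }
        log.Printf("SSH cmd output: %s", out)
    }
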
	I0314 18:18:26.889950  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:18:26.889975  960722 buildroot.go:174] setting up certificates
	I0314 18:18:26.889992  960722 provision.go:84] configureAuth start
	I0314 18:18:26.890012  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.890358  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:26.893298  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.893677  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.893707  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.893884  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.896046  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.896322  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.896357  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.896481  960722 provision.go:143] copyHostCerts
	I0314 18:18:26.896522  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:18:26.896569  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:18:26.896579  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:18:26.896652  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:18:26.896739  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:18:26.896759  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:18:26.896767  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:18:26.896795  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:18:26.896844  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:18:26.896862  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:18:26.896870  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:18:26.896893  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:18:26.896988  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786 san=[127.0.0.1 192.168.39.170 ha-105786 localhost minikube]
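
The server certificate above is issued off the local CA with the SANs listed in the log (127.0.0.1, 192.168.39.170, ha-105786, localhost, minikube). A condensed crypto/x509 sketch of that step; unlike the real flow, the CA is generated inline here rather than loaded from ca.pem/ca-key.pem, and the validity period simply reuses the CertExpiration value from the config dump:

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "log"
        "math/big"
        "net"
        "time"
    )

    func main() {
        // CA generated inline only for the sketch; minikube loads it from disk.
        caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        caTmpl := &x509.Certificate{
            SerialNumber:          big.NewInt(1),
            Subject:               pkix.Name{CommonName: "minikubeCA"},
            NotBefore:             time.Now(),
            NotAfter:              time.Now().Add(26280 * time.Hour), // CertExpiration from the config dump
            IsCA:                  true,
            KeyUsage:              x509.KeyUsageCertSign,
            BasicConstraintsValid: true,
        }
        caDER, err := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        caCert, _ := x509.ParseCertificate(caDER)

        // Server cert with the SANs from the log line above.
        srvKey, _ := rsa.GenerateKey(rand.Reader, 2048)
        srvTmpl := &x509.Certificate{
            SerialNumber: big.NewInt(2),
            Subject:      pkix.Name{Organization: []string{"jenkins.ha-105786"}},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().Add(26280 * time.Hour),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            DNSNames:     []string{"ha-105786", "localhost", "minikube"},
            IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.170")},
        }
        if _, err := x509.CreateCertificate(rand.Reader, srvTmpl, caCert, &srvKey.PublicKey, caKey); err != nil {
            log.Fatal(err)
        }
        log.Println("server cert issued with SANs from the log")
    }
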
	I0314 18:18:27.042804  960722 provision.go:177] copyRemoteCerts
	I0314 18:18:27.042869  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:18:27.042897  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.045808  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.046142  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.046181  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.046360  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.046567  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.046761  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.046936  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.131474  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:18:27.131542  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:18:27.162593  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:18:27.162658  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:18:27.195241  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:18:27.195311  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0314 18:18:27.225509  960722 provision.go:87] duration metric: took 335.494563ms to configureAuth
	I0314 18:18:27.225538  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:18:27.225717  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:18:27.225826  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.228516  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.228951  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.228975  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.229219  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.229452  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.229613  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.229733  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.229889  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:27.230076  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:27.230097  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:18:27.503688  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:18:27.503721  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:18:27.503740  960722 main.go:141] libmachine: (ha-105786) Calling .GetURL
	I0314 18:18:27.505347  960722 main.go:141] libmachine: (ha-105786) DBG | Using libvirt version 6000000
	I0314 18:18:27.507719  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.508007  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.508030  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.508259  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:18:27.508276  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:18:27.508285  960722 client.go:171] duration metric: took 24.523342142s to LocalClient.Create
	I0314 18:18:27.508310  960722 start.go:167] duration metric: took 24.523398961s to libmachine.API.Create "ha-105786"
	I0314 18:18:27.508324  960722 start.go:293] postStartSetup for "ha-105786" (driver="kvm2")
	I0314 18:18:27.508340  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:18:27.508363  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.508606  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:18:27.508624  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.510740  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.511066  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.511096  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.511296  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.511490  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.511689  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.511843  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.599488  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:18:27.604015  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:18:27.604038  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:18:27.604098  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:18:27.604193  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:18:27.604207  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:18:27.604347  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:18:27.615221  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:18:27.640154  960722 start.go:296] duration metric: took 131.816012ms for postStartSetup
	I0314 18:18:27.640200  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:27.640800  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:27.643784  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.644155  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.644177  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.644466  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:18:27.644631  960722 start.go:128] duration metric: took 24.677595105s to createHost
	I0314 18:18:27.644656  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.646948  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.647438  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.647466  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.647632  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.647802  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.647978  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.648129  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.648350  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:27.648516  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:27.648534  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:18:27.749277  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440307.721502673
	
	I0314 18:18:27.749306  960722 fix.go:216] guest clock: 1710440307.721502673
	I0314 18:18:27.749317  960722 fix.go:229] Guest: 2024-03-14 18:18:27.721502673 +0000 UTC Remote: 2024-03-14 18:18:27.644643708 +0000 UTC m=+24.798488720 (delta=76.858965ms)
	I0314 18:18:27.749337  960722 fix.go:200] guest clock delta is within tolerance: 76.858965ms
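
The guest-clock check above parses `date +%s.%N` from the VM, compares it with the host's wall clock, and accepts the machine when the delta stays inside a tolerance. A small illustration with the timestamps from the log; the 1s threshold is an assumption made for the example, not minikube's actual constant:

    package main

    import (
        "fmt"
        "time"
    )

    func main() {
        guest := time.Unix(1710440307, 721502673)                      // parsed from `date +%s.%N` on the guest
        host := time.Date(2024, 3, 14, 18, 18, 27, 644643708, time.UTC) // host time when the command returned

        delta := guest.Sub(host)
        if delta < 0 {
            delta = -delta
        }
        const tolerance = time.Second // assumed threshold for the example
        if delta <= tolerance {
            fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
        } else {
            fmt.Printf("guest clock skewed by %v, would resync\n", delta)
        }
    }
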
	I0314 18:18:27.749343  960722 start.go:83] releasing machines lock for "ha-105786", held for 24.78237756s
	I0314 18:18:27.749363  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.749665  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:27.752365  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.752715  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.752752  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.752902  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753381  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753570  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753681  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:18:27.753727  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.753861  960722 ssh_runner.go:195] Run: cat /version.json
	I0314 18:18:27.753888  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.756457  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756748  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756783  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.756802  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756899  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.757070  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.757179  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.757198  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.757223  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.757418  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.757432  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.757616  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.757775  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.757918  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.857480  960722 ssh_runner.go:195] Run: systemctl --version
	I0314 18:18:27.863499  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:18:28.026089  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:18:28.032954  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:18:28.033024  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:18:28.051341  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:18:28.051369  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:18:28.051449  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:18:28.068602  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:28.083502  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:18:28.083557  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:18:28.097189  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:18:28.110819  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:18:28.223881  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:18:28.371695  960722 docker.go:233] disabling docker service ...
	I0314 18:18:28.371781  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:18:28.386496  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:18:28.399599  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:18:28.528120  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:18:28.664621  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:18:28.678995  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:28.698960  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:18:28.699033  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.710540  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:18:28.710614  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.721780  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.732859  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
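The three sed edits above align the CRI-O drop-in with the settings used later in the run (the cgroupfs cgroup driver and the pause:3.9 image). A sketch of verifying the result, with the expected values taken from the commands above:

	grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
	# pause_image = "registry.k8s.io/pause:3.9"
	# cgroup_manager = "cgroupfs"
	# conmon_cgroup = "pod"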
	I0314 18:18:28.743777  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:18:28.755894  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:18:28.765722  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
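The failed sysctl only means the br_netfilter module is not loaded yet (without it /proc/sys/net/bridge does not exist), which is why the next steps load the module and re-enable IPv4 forwarding. Checked by hand, the sequence looks like this (sketch):

	lsmod | grep br_netfilter || sudo modprobe br_netfilter
	sysctl net.bridge.bridge-nf-call-iptables   # resolves once the module is loaded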
	I0314 18:18:28.765767  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:18:28.780136  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:18:28.789860  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:28.928565  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:18:29.066170  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:18:29.066257  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:18:29.071920  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:18:29.071968  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:18:29.076359  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:18:29.122746  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:18:29.122830  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:18:29.154125  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:18:29.186433  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:18:29.187711  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:29.190440  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:29.190762  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:29.190798  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:29.190991  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:18:29.195470  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:18:29.209255  960722 kubeadm.go:877] updating cluster {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 M
ountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:18:29.209404  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:18:29.209461  960722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:18:29.244992  960722 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 18:18:29.245061  960722 ssh_runner.go:195] Run: which lz4
	I0314 18:18:29.249342  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 18:18:29.249447  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 18:18:29.254169  960722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 18:18:29.254197  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 18:18:31.084689  960722 crio.go:444] duration metric: took 1.835272399s to copy over tarball
	I0314 18:18:31.084777  960722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 18:18:33.846290  960722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.761478036s)
	I0314 18:18:33.846321  960722 crio.go:451] duration metric: took 2.761602368s to extract the tarball
	I0314 18:18:33.846328  960722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 18:18:33.888938  960722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:18:33.938435  960722 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:18:33.938464  960722 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:18:33.938474  960722 kubeadm.go:928] updating node { 192.168.39.170 8443 v1.28.4 crio true true} ...
	I0314 18:18:33.938623  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:18:33.938711  960722 ssh_runner.go:195] Run: crio config
	I0314 18:18:34.006442  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:34.006465  960722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:18:34.006477  960722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:18:34.006504  960722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105786 NodeName:ha-105786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:18:34.006632  960722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105786"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:18:34.006656  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:18:34.006714  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
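kube-vip is written out as a static pod on the control-plane node and uses Lease-based leader election (vip_leasename: plndr-cp-lock, cp_namespace: kube-system above) to decide which node advertises the 192.168.39.254 VIP. Once the cluster is up, the current holder can be inspected with a one-liner like this (sketch):

	kubectl -n kube-system get lease plndr-cp-lock -o jsonpath='{.spec.holderIdentity}'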
	I0314 18:18:34.006760  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:18:34.020479  960722 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:18:34.020550  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:18:34.035690  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0314 18:18:34.058819  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:18:34.081190  960722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
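The kubeadm config shown earlier is what lands in /var/tmp/minikube/kubeadm.yaml.new here (2153 bytes); it is later copied to /var/tmp/minikube/kubeadm.yaml before kubeadm init runs against it. To sanity-check such a file by hand without changing the node, kubeadm's dry-run mode can be used, for example (illustrative only, run on the guest):

	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml --dry-run \
	  --ignore-preflight-errors=Swap,NumCPU,Mem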
	I0314 18:18:34.104015  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:18:34.122637  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:18:34.127020  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
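This pins control-plane.minikube.internal, the controlPlaneEndpoint from the kubeadm config, to the kube-vip VIP rather than to the node's own address, so kubeconfigs written by kubeadm keep working if the API server moves to another control-plane node. The resulting entry, as a quick check (sketch):

	grep control-plane.minikube.internal /etc/hosts
	# 192.168.39.254	control-plane.minikube.internal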
	I0314 18:18:34.140470  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:34.273560  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:18:34.293533  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.170
	I0314 18:18:34.293561  960722 certs.go:194] generating shared ca certs ...
	I0314 18:18:34.293579  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.293771  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:18:34.293832  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:18:34.293846  960722 certs.go:256] generating profile certs ...
	I0314 18:18:34.293907  960722 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:18:34.293926  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt with IP's: []
	I0314 18:18:34.363624  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt ...
	I0314 18:18:34.363656  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt: {Name:mk521f0de305a43ea283b038c3d788bb59bfde56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.363858  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key ...
	I0314 18:18:34.363873  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key: {Name:mk1a0bd182fa9492a498d0d5b485dad4277d90a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.363978  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c
	I0314 18:18:34.364000  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.254]
	I0314 18:18:34.461729  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c ...
	I0314 18:18:34.461761  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c: {Name:mk50c098325f62ce81d89b9e8c1f3ec90e4bf90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.461954  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c ...
	I0314 18:18:34.461977  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c: {Name:mk8aac17b0cfef91c6789f6d8dae3cb7806fcdd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.462078  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:18:34.462205  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:18:34.462288  960722 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:18:34.462315  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt with IP's: []
	I0314 18:18:34.610899  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt ...
	I0314 18:18:34.610932  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt: {Name:mk94ac1976010d6f666bb6ae031e119703a2dfaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.611102  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key ...
	I0314 18:18:34.611114  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key: {Name:mkd90e890f5b0c8090e1d58015e8b16a4114332c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.611187  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:18:34.611219  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:18:34.611239  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:18:34.611252  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:18:34.611265  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:18:34.611278  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:18:34.611290  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:18:34.611302  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:18:34.611347  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:18:34.611389  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:18:34.611400  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:18:34.611421  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:18:34.611451  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:18:34.611479  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:18:34.611514  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:18:34.611548  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.611567  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:34.611579  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:18:34.612246  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:18:34.641651  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:18:34.669535  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:18:34.697069  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:18:34.724855  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:18:34.752813  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:18:34.780891  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:18:34.807976  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:18:34.835847  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:18:34.869677  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:18:34.896834  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:18:34.924111  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:18:34.942480  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:18:34.948686  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:18:34.960274  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.965262  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.965314  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.971625  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:18:34.983752  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:18:34.995538  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.000530  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.000586  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.007110  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:18:35.018842  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:18:35.030720  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.035627  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.035682  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.041936  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
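Each CA file above is copied into /usr/share/ca-certificates, linked into /etc/ssl/certs under its own name, and then linked again under its OpenSSL subject-hash name (b5213941.0 for minikubeCA.pem, for instance), which is how OpenSSL-based clients locate trusted CAs. The same pattern done by hand, as a sketch:

	HASH=$(openssl x509 -hash -noout -in /etc/ssl/certs/minikubeCA.pem)
	sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${HASH}.0"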
	I0314 18:18:35.053705  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:18:35.058345  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:18:35.058405  960722 kubeadm.go:391] StartCluster: {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 Moun
tType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:18:35.058506  960722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:18:35.058576  960722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:18:35.104718  960722 cri.go:89] found id: ""
	I0314 18:18:35.104799  960722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:18:35.123713  960722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:18:35.145008  960722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:18:35.169536  960722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:18:35.169584  960722 kubeadm.go:156] found existing configuration files:
	
	I0314 18:18:35.169661  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:18:35.187064  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:18:35.187175  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:18:35.209160  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:18:35.219891  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:18:35.219949  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:18:35.230159  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:18:35.240035  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:18:35.240101  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:18:35.250402  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:18:35.260041  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:18:35.260099  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 18:18:35.270606  960722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 18:18:35.527249  960722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:18:50.279972  960722 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:18:50.280080  960722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:18:50.280159  960722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:18:50.280308  960722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:18:50.280433  960722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 18:18:50.280524  960722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:18:50.282142  960722 out.go:204]   - Generating certificates and keys ...
	I0314 18:18:50.282262  960722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:18:50.282340  960722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:18:50.282461  960722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:18:50.282556  960722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:18:50.282659  960722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:18:50.282711  960722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:18:50.282771  960722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:18:50.282909  960722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-105786 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0314 18:18:50.282985  960722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:18:50.283149  960722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-105786 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0314 18:18:50.283238  960722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:18:50.283333  960722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:18:50.283399  960722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:18:50.283483  960722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:18:50.283563  960722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:18:50.283621  960722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:18:50.283696  960722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:18:50.283777  960722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:18:50.283917  960722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:18:50.284008  960722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:18:50.285632  960722 out.go:204]   - Booting up control plane ...
	I0314 18:18:50.285748  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:18:50.285838  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:18:50.285942  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:18:50.286072  960722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:18:50.286210  960722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:18:50.286282  960722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:18:50.286480  960722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:18:50.286585  960722 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.627994 seconds
	I0314 18:18:50.286738  960722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:18:50.286933  960722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:18:50.287020  960722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:18:50.287251  960722 kubeadm.go:309] [mark-control-plane] Marking the node ha-105786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:18:50.287349  960722 kubeadm.go:309] [bootstrap-token] Using token: 7tm0k5.0klpcf5r6yb9tlsb
	I0314 18:18:50.288682  960722 out.go:204]   - Configuring RBAC rules ...
	I0314 18:18:50.288797  960722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:18:50.288875  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:18:50.289023  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:18:50.289205  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:18:50.289385  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:18:50.289496  960722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:18:50.289624  960722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:18:50.289682  960722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:18:50.289750  960722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:18:50.289763  960722 kubeadm.go:309] 
	I0314 18:18:50.289846  960722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:18:50.289855  960722 kubeadm.go:309] 
	I0314 18:18:50.289933  960722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:18:50.289943  960722 kubeadm.go:309] 
	I0314 18:18:50.289976  960722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:18:50.290054  960722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:18:50.290130  960722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:18:50.290139  960722 kubeadm.go:309] 
	I0314 18:18:50.290190  960722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:18:50.290199  960722 kubeadm.go:309] 
	I0314 18:18:50.290295  960722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:18:50.290313  960722 kubeadm.go:309] 
	I0314 18:18:50.290372  960722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:18:50.290437  960722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:18:50.290501  960722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:18:50.290510  960722 kubeadm.go:309] 
	I0314 18:18:50.290594  960722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:18:50.290660  960722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:18:50.290666  960722 kubeadm.go:309] 
	I0314 18:18:50.290733  960722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7tm0k5.0klpcf5r6yb9tlsb \
	I0314 18:18:50.290855  960722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 18:18:50.290901  960722 kubeadm.go:309] 	--control-plane 
	I0314 18:18:50.290911  960722 kubeadm.go:309] 
	I0314 18:18:50.291003  960722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:18:50.291017  960722 kubeadm.go:309] 
	I0314 18:18:50.291121  960722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7tm0k5.0klpcf5r6yb9tlsb \
	I0314 18:18:50.291245  960722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 18:18:50.291281  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:50.291294  960722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:18:50.292938  960722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 18:18:50.294349  960722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 18:18:50.315278  960722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 18:18:50.315302  960722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 18:18:50.353422  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 18:18:51.467661  960722 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.114181908s)
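The applied /var/tmp/minikube/cni.yaml is the kindnet manifest minikube recommends once multinode is detected (see the cni.go lines above). Assuming the DaemonSet in that manifest is named kindnet, its rollout can be followed with (sketch):

	kubectl -n kube-system rollout status daemonset/kindnet --timeout=60s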
	I0314 18:18:51.467713  960722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:18:51.467838  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:51.467871  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786 minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=true
	I0314 18:18:51.514442  960722 ops.go:34] apiserver oom_adj: -16
	I0314 18:18:51.681859  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:52.182201  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:52.682373  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:53.182046  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:53.682159  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:54.181904  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:54.682343  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:55.182245  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:55.681992  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:56.182155  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:56.682005  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:57.182337  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:57.682594  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:58.182384  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:58.682911  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:59.182017  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:59.682641  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.182000  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.682494  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.831620  960722 kubeadm.go:1106] duration metric: took 9.363874212s to wait for elevateKubeSystemPrivileges
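The repeated kubectl get sa default calls above poll (roughly every 500 ms) until the default ServiceAccount exists; that wait accounts for the 9.36s elevateKubeSystemPrivileges duration reported here. The equivalent wait by hand, as a sketch mirroring the logged invocation:

	until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
	      --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
	  sleep 0.5
	done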
	W0314 18:19:00.831664  960722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:19:00.831672  960722 kubeadm.go:393] duration metric: took 25.773272774s to StartCluster
	I0314 18:19:00.831692  960722 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:00.831768  960722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:19:00.832530  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:00.832732  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:19:00.832741  960722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:00.832765  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:19:00.832776  960722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 18:19:00.832908  960722 addons.go:69] Setting storage-provisioner=true in profile "ha-105786"
	I0314 18:19:00.832935  960722 addons.go:69] Setting default-storageclass=true in profile "ha-105786"
	I0314 18:19:00.832981  960722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-105786"
	I0314 18:19:00.833010  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:00.832943  960722 addons.go:234] Setting addon storage-provisioner=true in "ha-105786"
	I0314 18:19:00.833076  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:00.833451  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.833474  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.833486  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.833497  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.849486  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0314 18:19:00.849523  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0314 18:19:00.849987  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.850007  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.850537  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.850557  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.850577  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.850598  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.850941  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.850971  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.851155  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.851489  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.851519  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.853457  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:19:00.853818  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:19:00.854363  960722 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 18:19:00.854579  960722 addons.go:234] Setting addon default-storageclass=true in "ha-105786"
	I0314 18:19:00.854626  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:00.854930  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.854953  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.866778  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0314 18:19:00.867222  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.867721  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.867744  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.868150  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.868365  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.868987  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0314 18:19:00.869450  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.869853  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.869875  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.870261  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.870431  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:00.872807  960722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:19:00.870938  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.872845  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.874666  960722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:00.874697  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:19:00.874717  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:00.877611  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.878117  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:00.878149  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.878392  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:00.878573  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:00.878724  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:00.878845  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:00.888259  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0314 18:19:00.888656  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.889221  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.889247  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.889601  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.889812  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.891343  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:00.891635  960722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:00.891656  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:19:00.891682  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:00.894534  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.894921  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:00.894950  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.895115  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:00.895296  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:00.895478  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:00.895632  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:00.939242  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:19:01.006796  960722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:01.049695  960722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:01.373612  960722 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0314 18:19:01.685782  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.685814  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.685792  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.685886  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686174  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686197  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686207  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.686214  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686220  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.686174  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686263  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686284  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.686295  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686433  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686449  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686449  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.686530  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686545  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686594  960722 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 18:19:01.686608  960722 round_trippers.go:469] Request Headers:
	I0314 18:19:01.686619  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:01.686626  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:19:01.697189  960722 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:19:01.697945  960722 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 18:19:01.697962  960722 round_trippers.go:469] Request Headers:
	I0314 18:19:01.697970  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:19:01.697973  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:01.697977  960722 round_trippers.go:473]     Content-Type: application/json
	I0314 18:19:01.700816  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:19:01.701321  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.701335  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.701571  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.701592  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.701613  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.703278  960722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 18:19:01.704554  960722 addons.go:505] duration metric: took 871.776369ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 18:19:01.704593  960722 start.go:245] waiting for cluster config update ...
	I0314 18:19:01.704618  960722 start.go:254] writing updated cluster config ...
	I0314 18:19:01.706213  960722 out.go:177] 
	I0314 18:19:01.707745  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:01.707826  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:01.709513  960722 out.go:177] * Starting "ha-105786-m02" control-plane node in "ha-105786" cluster
	I0314 18:19:01.710629  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:19:01.710662  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:19:01.710741  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:19:01.710754  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:19:01.710830  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:01.710990  960722 start.go:360] acquireMachinesLock for ha-105786-m02: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:19:01.711040  960722 start.go:364] duration metric: took 28.593µs to acquireMachinesLock for "ha-105786-m02"
	I0314 18:19:01.711064  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:01.711163  960722 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0314 18:19:01.712664  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:19:01.712752  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:01.712781  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:01.728804  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0314 18:19:01.729285  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:01.729841  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:01.729864  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:01.730219  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:01.730479  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:01.730650  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:01.730801  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:19:01.730841  960722 client.go:168] LocalClient.Create starting
	I0314 18:19:01.730880  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:19:01.730922  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:01.730944  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:01.731013  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:19:01.731040  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:01.731056  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:01.731080  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:19:01.731092  960722 main.go:141] libmachine: (ha-105786-m02) Calling .PreCreateCheck
	I0314 18:19:01.731262  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:01.731696  960722 main.go:141] libmachine: Creating machine...
	I0314 18:19:01.731712  960722 main.go:141] libmachine: (ha-105786-m02) Calling .Create
	I0314 18:19:01.731862  960722 main.go:141] libmachine: (ha-105786-m02) Creating KVM machine...
	I0314 18:19:01.733198  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found existing default KVM network
	I0314 18:19:01.733304  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found existing private KVM network mk-ha-105786
	I0314 18:19:01.733431  960722 main.go:141] libmachine: (ha-105786-m02) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 ...
	I0314 18:19:01.733460  960722 main.go:141] libmachine: (ha-105786-m02) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:19:01.733506  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:01.733408  961068 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:19:01.733639  960722 main.go:141] libmachine: (ha-105786-m02) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:19:01.999912  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:01.999756  961068 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa...
	I0314 18:19:02.186279  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:02.186121  961068 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/ha-105786-m02.rawdisk...
	I0314 18:19:02.186316  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Writing magic tar header
	I0314 18:19:02.186329  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Writing SSH key tar header
	I0314 18:19:02.186341  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:02.186299  961068 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 ...
	I0314 18:19:02.186468  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02
	I0314 18:19:02.186489  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:19:02.186503  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 (perms=drwx------)
	I0314 18:19:02.186512  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:19:02.186520  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:19:02.186529  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:19:02.186537  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:19:02.186551  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:19:02.186565  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:19:02.186579  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:19:02.186616  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:19:02.186644  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:19:02.186654  960722 main.go:141] libmachine: (ha-105786-m02) Creating domain...
	I0314 18:19:02.186673  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home
	I0314 18:19:02.186685  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Skipping /home - not owner
	I0314 18:19:02.187570  960722 main.go:141] libmachine: (ha-105786-m02) define libvirt domain using xml: 
	I0314 18:19:02.187596  960722 main.go:141] libmachine: (ha-105786-m02) <domain type='kvm'>
	I0314 18:19:02.187606  960722 main.go:141] libmachine: (ha-105786-m02)   <name>ha-105786-m02</name>
	I0314 18:19:02.187619  960722 main.go:141] libmachine: (ha-105786-m02)   <memory unit='MiB'>2200</memory>
	I0314 18:19:02.187640  960722 main.go:141] libmachine: (ha-105786-m02)   <vcpu>2</vcpu>
	I0314 18:19:02.187648  960722 main.go:141] libmachine: (ha-105786-m02)   <features>
	I0314 18:19:02.187653  960722 main.go:141] libmachine: (ha-105786-m02)     <acpi/>
	I0314 18:19:02.187659  960722 main.go:141] libmachine: (ha-105786-m02)     <apic/>
	I0314 18:19:02.187665  960722 main.go:141] libmachine: (ha-105786-m02)     <pae/>
	I0314 18:19:02.187671  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.187676  960722 main.go:141] libmachine: (ha-105786-m02)   </features>
	I0314 18:19:02.187682  960722 main.go:141] libmachine: (ha-105786-m02)   <cpu mode='host-passthrough'>
	I0314 18:19:02.187690  960722 main.go:141] libmachine: (ha-105786-m02)   
	I0314 18:19:02.187697  960722 main.go:141] libmachine: (ha-105786-m02)   </cpu>
	I0314 18:19:02.187729  960722 main.go:141] libmachine: (ha-105786-m02)   <os>
	I0314 18:19:02.187753  960722 main.go:141] libmachine: (ha-105786-m02)     <type>hvm</type>
	I0314 18:19:02.187763  960722 main.go:141] libmachine: (ha-105786-m02)     <boot dev='cdrom'/>
	I0314 18:19:02.187773  960722 main.go:141] libmachine: (ha-105786-m02)     <boot dev='hd'/>
	I0314 18:19:02.187781  960722 main.go:141] libmachine: (ha-105786-m02)     <bootmenu enable='no'/>
	I0314 18:19:02.187791  960722 main.go:141] libmachine: (ha-105786-m02)   </os>
	I0314 18:19:02.187799  960722 main.go:141] libmachine: (ha-105786-m02)   <devices>
	I0314 18:19:02.187810  960722 main.go:141] libmachine: (ha-105786-m02)     <disk type='file' device='cdrom'>
	I0314 18:19:02.187833  960722 main.go:141] libmachine: (ha-105786-m02)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/boot2docker.iso'/>
	I0314 18:19:02.187855  960722 main.go:141] libmachine: (ha-105786-m02)       <target dev='hdc' bus='scsi'/>
	I0314 18:19:02.187869  960722 main.go:141] libmachine: (ha-105786-m02)       <readonly/>
	I0314 18:19:02.187876  960722 main.go:141] libmachine: (ha-105786-m02)     </disk>
	I0314 18:19:02.187898  960722 main.go:141] libmachine: (ha-105786-m02)     <disk type='file' device='disk'>
	I0314 18:19:02.187907  960722 main.go:141] libmachine: (ha-105786-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:19:02.187919  960722 main.go:141] libmachine: (ha-105786-m02)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/ha-105786-m02.rawdisk'/>
	I0314 18:19:02.187931  960722 main.go:141] libmachine: (ha-105786-m02)       <target dev='hda' bus='virtio'/>
	I0314 18:19:02.187944  960722 main.go:141] libmachine: (ha-105786-m02)     </disk>
	I0314 18:19:02.187961  960722 main.go:141] libmachine: (ha-105786-m02)     <interface type='network'>
	I0314 18:19:02.187982  960722 main.go:141] libmachine: (ha-105786-m02)       <source network='mk-ha-105786'/>
	I0314 18:19:02.187993  960722 main.go:141] libmachine: (ha-105786-m02)       <model type='virtio'/>
	I0314 18:19:02.188002  960722 main.go:141] libmachine: (ha-105786-m02)     </interface>
	I0314 18:19:02.188009  960722 main.go:141] libmachine: (ha-105786-m02)     <interface type='network'>
	I0314 18:19:02.188015  960722 main.go:141] libmachine: (ha-105786-m02)       <source network='default'/>
	I0314 18:19:02.188022  960722 main.go:141] libmachine: (ha-105786-m02)       <model type='virtio'/>
	I0314 18:19:02.188027  960722 main.go:141] libmachine: (ha-105786-m02)     </interface>
	I0314 18:19:02.188037  960722 main.go:141] libmachine: (ha-105786-m02)     <serial type='pty'>
	I0314 18:19:02.188054  960722 main.go:141] libmachine: (ha-105786-m02)       <target port='0'/>
	I0314 18:19:02.188071  960722 main.go:141] libmachine: (ha-105786-m02)     </serial>
	I0314 18:19:02.188080  960722 main.go:141] libmachine: (ha-105786-m02)     <console type='pty'>
	I0314 18:19:02.188091  960722 main.go:141] libmachine: (ha-105786-m02)       <target type='serial' port='0'/>
	I0314 18:19:02.188107  960722 main.go:141] libmachine: (ha-105786-m02)     </console>
	I0314 18:19:02.188118  960722 main.go:141] libmachine: (ha-105786-m02)     <rng model='virtio'>
	I0314 18:19:02.188129  960722 main.go:141] libmachine: (ha-105786-m02)       <backend model='random'>/dev/random</backend>
	I0314 18:19:02.188151  960722 main.go:141] libmachine: (ha-105786-m02)     </rng>
	I0314 18:19:02.188162  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.188172  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.188180  960722 main.go:141] libmachine: (ha-105786-m02)   </devices>
	I0314 18:19:02.188190  960722 main.go:141] libmachine: (ha-105786-m02) </domain>
	I0314 18:19:02.188200  960722 main.go:141] libmachine: (ha-105786-m02) 
	I0314 18:19:02.195479  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:3c:23:3b in network default
	I0314 18:19:02.196233  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring networks are active...
	I0314 18:19:02.196255  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:02.197157  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring network default is active
	I0314 18:19:02.197489  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring network mk-ha-105786 is active
	I0314 18:19:02.197915  960722 main.go:141] libmachine: (ha-105786-m02) Getting domain xml...
	I0314 18:19:02.198754  960722 main.go:141] libmachine: (ha-105786-m02) Creating domain...
	I0314 18:19:03.462883  960722 main.go:141] libmachine: (ha-105786-m02) Waiting to get IP...
	I0314 18:19:03.464044  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.464497  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.464530  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.464461  961068 retry.go:31] will retry after 187.92215ms: waiting for machine to come up
	I0314 18:19:03.654271  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.654865  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.654891  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.654817  961068 retry.go:31] will retry after 341.857787ms: waiting for machine to come up
	I0314 18:19:03.998431  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.999032  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.999061  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.998976  961068 retry.go:31] will retry after 400.056291ms: waiting for machine to come up
	I0314 18:19:04.400712  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:04.401264  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:04.401300  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:04.401218  961068 retry.go:31] will retry after 423.388529ms: waiting for machine to come up
	I0314 18:19:04.825914  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:04.826470  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:04.826506  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:04.826407  961068 retry.go:31] will retry after 607.405727ms: waiting for machine to come up
	I0314 18:19:05.435370  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:05.435814  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:05.435837  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:05.435757  961068 retry.go:31] will retry after 608.06293ms: waiting for machine to come up
	I0314 18:19:06.045458  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:06.045972  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:06.046022  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:06.045918  961068 retry.go:31] will retry after 766.912118ms: waiting for machine to come up
	I0314 18:19:06.814534  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:06.815178  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:06.815214  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:06.815117  961068 retry.go:31] will retry after 940.207735ms: waiting for machine to come up
	I0314 18:19:07.756605  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:07.757086  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:07.757122  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:07.757024  961068 retry.go:31] will retry after 1.190260571s: waiting for machine to come up
	I0314 18:19:08.949393  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:08.949832  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:08.949857  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:08.949758  961068 retry.go:31] will retry after 1.987190642s: waiting for machine to come up
	I0314 18:19:10.939878  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:10.940509  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:10.940540  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:10.940440  961068 retry.go:31] will retry after 2.423045223s: waiting for machine to come up
	I0314 18:19:13.365954  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:13.366461  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:13.366495  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:13.366407  961068 retry.go:31] will retry after 3.422669414s: waiting for machine to come up
	I0314 18:19:16.790984  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:16.791433  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:16.791482  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:16.791385  961068 retry.go:31] will retry after 2.787821186s: waiting for machine to come up
	I0314 18:19:19.582366  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:19.582857  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:19.582881  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:19.582807  961068 retry.go:31] will retry after 3.642963538s: waiting for machine to come up
	I0314 18:19:23.228018  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.228396  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.228421  960722 main.go:141] libmachine: (ha-105786-m02) Found IP for machine: 192.168.39.245
	I0314 18:19:23.228432  960722 main.go:141] libmachine: (ha-105786-m02) Reserving static IP address...
	I0314 18:19:23.228854  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find host DHCP lease matching {name: "ha-105786-m02", mac: "52:54:00:c9:c4:3c", ip: "192.168.39.245"} in network mk-ha-105786
	I0314 18:19:23.304825  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Getting to WaitForSSH function...
	I0314 18:19:23.304858  960722 main.go:141] libmachine: (ha-105786-m02) Reserved static IP address: 192.168.39.245
	I0314 18:19:23.304877  960722 main.go:141] libmachine: (ha-105786-m02) Waiting for SSH to be available...
	I0314 18:19:23.307770  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.308270  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.308299  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.308468  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using SSH client type: external
	I0314 18:19:23.308494  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa (-rw-------)
	I0314 18:19:23.308539  960722 main.go:141] libmachine: (ha-105786-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:19:23.308558  960722 main.go:141] libmachine: (ha-105786-m02) DBG | About to run SSH command:
	I0314 18:19:23.308575  960722 main.go:141] libmachine: (ha-105786-m02) DBG | exit 0
	I0314 18:19:23.436637  960722 main.go:141] libmachine: (ha-105786-m02) DBG | SSH cmd err, output: <nil>: 
	I0314 18:19:23.436883  960722 main.go:141] libmachine: (ha-105786-m02) KVM machine creation complete!
	I0314 18:19:23.437267  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:23.437807  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:23.438017  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:23.438200  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:19:23.438213  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:19:23.439525  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:19:23.439543  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:19:23.439551  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:19:23.439559  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.442000  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.442286  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.442308  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.442472  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.442670  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.442845  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.442975  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.443136  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.443381  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.443397  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:19:23.555846  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:19:23.555877  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:19:23.555887  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.558575  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.558998  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.559039  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.559164  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.559397  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.559569  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.559738  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.559956  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.560131  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.560142  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:19:23.673387  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:19:23.673451  960722 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:19:23.673458  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:19:23.673466  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.673775  960722 buildroot.go:166] provisioning hostname "ha-105786-m02"
	I0314 18:19:23.673806  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.673992  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.676742  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.677073  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.677092  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.677223  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.677409  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.677566  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.677712  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.677878  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.678069  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.678086  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786-m02 && echo "ha-105786-m02" | sudo tee /etc/hostname
	I0314 18:19:23.809827  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786-m02
	
	I0314 18:19:23.809878  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.812967  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.813384  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.813410  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.813621  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.813812  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.813987  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.814117  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.814299  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.814509  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.814527  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:19:23.939089  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:19:23.939121  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:19:23.939150  960722 buildroot.go:174] setting up certificates
	I0314 18:19:23.939170  960722 provision.go:84] configureAuth start
	I0314 18:19:23.939182  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.939507  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:23.942279  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.942667  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.942695  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.942867  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.945157  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.945498  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.945527  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.945633  960722 provision.go:143] copyHostCerts
	I0314 18:19:23.945685  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:19:23.945717  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:19:23.945727  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:19:23.945790  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:19:23.945910  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:19:23.945929  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:19:23.945943  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:19:23.945970  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:19:23.946027  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:19:23.946050  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:19:23.946057  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:19:23.946078  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:19:23.946155  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786-m02 san=[127.0.0.1 192.168.39.245 ha-105786-m02 localhost minikube]
	I0314 18:19:24.180161  960722 provision.go:177] copyRemoteCerts
	I0314 18:19:24.180240  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:19:24.180269  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.182870  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.183182  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.183229  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.183344  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.183531  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.183692  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.183883  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.271554  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:19:24.271638  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:19:24.298761  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:19:24.298833  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:19:24.327009  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:19:24.327078  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:19:24.353565  960722 provision.go:87] duration metric: took 414.379263ms to configureAuth
	I0314 18:19:24.353593  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:19:24.353751  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:24.353825  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.356549  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.356928  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.356952  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.357115  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.357324  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.357494  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.357675  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.357863  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:24.358075  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:24.358099  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:19:24.640322  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:19:24.640368  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:19:24.640381  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetURL
	I0314 18:19:24.641774  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using libvirt version 6000000
	I0314 18:19:24.643922  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.644313  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.644347  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.644512  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:19:24.644529  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:19:24.644537  960722 client.go:171] duration metric: took 22.913684551s to LocalClient.Create
	I0314 18:19:24.644561  960722 start.go:167] duration metric: took 22.913762805s to libmachine.API.Create "ha-105786"
	I0314 18:19:24.644572  960722 start.go:293] postStartSetup for "ha-105786-m02" (driver="kvm2")
	I0314 18:19:24.644591  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:19:24.644625  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.644893  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:19:24.644924  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.646994  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.647308  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.647340  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.647478  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.647656  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.647816  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.647952  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.735859  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:19:24.740830  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:19:24.740857  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:19:24.740920  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:19:24.740990  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:19:24.741001  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:19:24.741084  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:19:24.751988  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:19:24.779482  960722 start.go:296] duration metric: took 134.895535ms for postStartSetup
	I0314 18:19:24.779534  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:24.780095  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:24.782938  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.783265  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.783293  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.783592  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:24.783778  960722 start.go:128] duration metric: took 23.072599138s to createHost
	I0314 18:19:24.783814  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.785917  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.786226  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.786257  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.786409  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.786560  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.786722  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.786854  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.787016  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:24.787204  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:24.787217  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:19:24.901400  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440364.867809451
	
	I0314 18:19:24.901435  960722 fix.go:216] guest clock: 1710440364.867809451
	I0314 18:19:24.901446  960722 fix.go:229] Guest: 2024-03-14 18:19:24.867809451 +0000 UTC Remote: 2024-03-14 18:19:24.783790213 +0000 UTC m=+81.937635217 (delta=84.019238ms)
	I0314 18:19:24.901474  960722 fix.go:200] guest clock delta is within tolerance: 84.019238ms
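	For reference, the guest-clock check logged above is just an absolute-difference comparison between the time reported by the VM and the host's wall clock. A minimal Go sketch of that idea follows; the 2s tolerance is an assumption for illustration only (the real threshold lives in minikube's fix.go and is not shown in this log).

package main

import (
	"fmt"
	"time"
)

// withinTolerance reports whether the guest clock is close enough to the host
// clock. The tolerance value passed in main is only an illustrative assumption.
func withinTolerance(guest, host time.Time, tolerance time.Duration) bool {
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	return delta <= tolerance
}

func main() {
	host := time.Now()
	guest := host.Add(84 * time.Millisecond) // roughly the delta seen in the log above
	fmt.Println(withinTolerance(guest, host, 2*time.Second)) // true
}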
	I0314 18:19:24.901482  960722 start.go:83] releasing machines lock for "ha-105786-m02", held for 23.190429572s
	I0314 18:19:24.901514  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.901832  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:24.904756  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.905107  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.905148  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.907291  960722 out.go:177] * Found network options:
	I0314 18:19:24.908683  960722 out.go:177]   - NO_PROXY=192.168.39.170
	W0314 18:19:24.909868  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:19:24.909900  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910417  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910606  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910718  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:19:24.910764  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	W0314 18:19:24.910798  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:19:24.910887  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:19:24.910913  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.913501  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.913745  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.913905  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.913933  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.914088  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.914208  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.914236  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.914249  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.914423  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.914441  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.914601  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.914597  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.914776  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.914935  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:25.161719  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:19:25.171333  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:19:25.171417  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:19:25.189248  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:19:25.189276  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:19:25.189355  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:19:25.206412  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:19:25.221756  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:19:25.221820  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:19:25.237185  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:19:25.252085  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:19:25.377806  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:19:25.526121  960722 docker.go:233] disabling docker service ...
	I0314 18:19:25.526190  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:19:25.542877  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:19:25.557697  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:19:25.715305  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:19:25.852627  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:19:25.871050  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:19:25.894623  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:19:25.894705  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.906913  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:19:25.906993  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.918809  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.930511  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.941955  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:19:25.953466  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:19:25.964052  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:19:25.964112  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:19:25.979587  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:19:25.990878  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:26.127427  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:19:26.283859  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:19:26.283950  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:19:26.290144  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:19:26.290219  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:19:26.294233  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:19:26.335253  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:19:26.335375  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:19:26.365918  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:19:26.399964  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:19:26.401385  960722 out.go:177]   - env NO_PROXY=192.168.39.170
	I0314 18:19:26.402665  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:26.405642  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:26.406041  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:26.406083  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:26.406334  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:19:26.410963  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:19:26.425182  960722 mustload.go:65] Loading cluster: ha-105786
	I0314 18:19:26.425388  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:26.425664  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:26.425697  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:26.440672  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0314 18:19:26.441113  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:26.441607  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:26.441631  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:26.441961  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:26.442159  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:26.443603  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:26.443940  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:26.443983  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:26.458381  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0314 18:19:26.458760  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:26.459187  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:26.459205  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:26.459515  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:26.459689  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:26.459844  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.245
	I0314 18:19:26.459856  960722 certs.go:194] generating shared ca certs ...
	I0314 18:19:26.459874  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.460023  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:19:26.460073  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:19:26.460086  960722 certs.go:256] generating profile certs ...
	I0314 18:19:26.460177  960722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:19:26.460252  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b
	I0314 18:19:26.460278  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.254]
	I0314 18:19:26.627303  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b ...
	I0314 18:19:26.627340  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b: {Name:mka43f08d6f2befad5f191afd79378e4364c7b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.627543  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b ...
	I0314 18:19:26.627561  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b: {Name:mk69889debaf40240194e9108e35810aec9c2fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.627660  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:19:26.627937  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:19:26.628125  960722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:19:26.628149  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:19:26.628166  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:19:26.628184  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:19:26.628200  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:19:26.628238  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:19:26.628254  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:19:26.628268  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:19:26.628285  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:19:26.628351  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:19:26.628388  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:19:26.628402  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:19:26.628443  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:19:26.628472  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:19:26.628505  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:19:26.628560  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:19:26.628597  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:19:26.628618  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:19:26.628638  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:26.628688  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:26.631676  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:26.632050  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:26.632074  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:26.632260  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:26.632483  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:26.632600  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:26.632702  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:26.704629  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:19:26.710520  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:19:26.723946  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:19:26.729482  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0314 18:19:26.742181  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:19:26.746858  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:19:26.759398  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:19:26.763892  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0314 18:19:26.776134  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:19:26.781079  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:19:26.793161  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:19:26.797438  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:19:26.809994  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:19:26.839682  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:19:26.867721  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:19:26.895815  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:19:26.923511  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 18:19:26.951247  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:19:26.977027  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:19:27.003274  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:19:27.029436  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:19:27.056352  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:19:27.081633  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:19:27.108286  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:19:27.126900  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0314 18:19:27.147015  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:19:27.166330  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0314 18:19:27.185637  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:19:27.205067  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:19:27.224522  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:19:27.243401  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:19:27.249815  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:19:27.262629  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.267430  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.267491  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.273514  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:19:27.286162  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:19:27.298719  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.303524  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.303586  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.309729  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:19:27.322726  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:19:27.335678  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.340721  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.340768  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.346602  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
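	The three blocks above install each certificate under /usr/share/ca-certificates and then link it into /etc/ssl/certs under its OpenSSL subject hash (for example b5213941.0), which is how OpenSSL-based clients look up trust anchors. A rough Go equivalent of that shell sequence, shelling out to openssl for the hash, is sketched below; it is illustrative only, since minikube actually issues these commands over SSH via ssh_runner.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash mirrors `openssl x509 -hash -noout -in cert` followed by
// `ln -fs cert /etc/ssl/certs/<hash>.0`.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}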
	I0314 18:19:27.359268  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:19:27.363787  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:19:27.363852  960722 kubeadm.go:928] updating node {m02 192.168.39.245 8443 v1.28.4 crio true true} ...
	I0314 18:19:27.363940  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
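	The kubelet drop-in shown above is rendered per node: --hostname-override comes from the machine name (ha-105786-m02), --node-ip from its DHCP lease (192.168.39.245), and the binary path is versioned under /var/lib/minikube/binaries. A small text/template sketch of rendering such an ExecStart line is below; the template text is illustrative and not minikube's exact template.

package main

import (
	"os"
	"text/template"
)

// kubeletFlags holds the per-node values substituted into the unit drop-in.
type kubeletFlags struct {
	KubernetesVersion string
	NodeName          string
	NodeIP            string
}

const execStartTmpl = `ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}
`

func main() {
	t := template.Must(template.New("kubelet").Parse(execStartTmpl))
	_ = t.Execute(os.Stdout, kubeletFlags{
		KubernetesVersion: "v1.28.4",
		NodeName:          "ha-105786-m02",
		NodeIP:            "192.168.39.245",
	})
}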
	I0314 18:19:27.363962  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:19:27.363993  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
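	The static Pod manifest above is what provides the control-plane VIP: kube-vip runs on every control-plane node, uses Lease-based leader election (vip_leaderelection with lease plndr-cp-lock), and the elected instance answers ARP for the `address` env value 192.168.39.254, the APIServerHAVIP behind control-plane.minikube.internal. A short Go sketch that reads that address back out of the manifest follows; it assumes gopkg.in/yaml.v3 is available and is illustrative only.

package main

import (
	"fmt"
	"os"

	"gopkg.in/yaml.v3"
)

// staticPod models just enough of the Pod shape to reach the container env list.
type staticPod struct {
	Spec struct {
		Containers []struct {
			Env []struct {
				Name  string `yaml:"name"`
				Value string `yaml:"value"`
			} `yaml:"env"`
		} `yaml:"containers"`
	} `yaml:"spec"`
}

func main() {
	data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	var pod staticPod
	if err := yaml.Unmarshal(data, &pod); err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	for _, c := range pod.Spec.Containers {
		for _, e := range c.Env {
			if e.Name == "address" {
				fmt.Println("control-plane VIP:", e.Value) // 192.168.39.254
			}
		}
	}
}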
	I0314 18:19:27.364028  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:27.375169  960722 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:19:27.375212  960722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:27.386518  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:19:27.386547  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:19:27.386617  960722 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0314 18:19:27.386631  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:19:27.386641  960722 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0314 18:19:27.391262  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:19:27.391292  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:19:27.930878  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:19:27.930982  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:19:27.936615  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:19:27.936649  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:19:28.437626  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:19:28.456388  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:19:28.456476  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:19:28.461147  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:19:28.461178  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
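	As the log shows, the secondary node does not reuse the primary's binaries: each of kubectl, kubeadm and kubelet is checked with stat and, when missing, copied from the host-side cache, which in turn is filled from dl.k8s.io using a checksum-qualified URL. A small sketch of assembling those download URLs, matching the form logged above (illustrative only):

package main

import "fmt"

// releaseURL builds the dl.k8s.io URL for one binary, with the checksum file
// referenced the way the log above shows (checksum=file:<url>.sha256).
func releaseURL(version, binary string) string {
	base := fmt.Sprintf("https://dl.k8s.io/release/%s/bin/linux/amd64/%s", version, binary)
	return fmt.Sprintf("%s?checksum=file:%s.sha256", base, base)
}

func main() {
	for _, b := range []string{"kubectl", "kubeadm", "kubelet"} {
		fmt.Println(releaseURL("v1.28.4", b))
	}
}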
	I0314 18:19:28.969853  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:19:28.980260  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:19:28.998795  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:19:29.017462  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:19:29.035504  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:19:29.039708  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:19:29.052609  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:29.182648  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:19:29.201749  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:29.202236  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:29.202279  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:29.218305  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0314 18:19:29.218754  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:29.219288  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:29.219322  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:29.219723  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:29.219947  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:29.220109  960722 start.go:316] joinCluster: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:19:29.220250  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:19:29.220274  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:29.223156  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:29.223669  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:29.223705  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:29.223819  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:29.224027  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:29.224181  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:29.224345  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:29.387394  960722 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:29.387458  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j8sw71.0n1hswrh9trtagi7 --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I0314 18:20:03.793170  960722 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j8sw71.0n1hswrh9trtagi7 --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (34.40567307s)
	I0314 18:20:03.793230  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:20:04.346563  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786-m02 minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=false
	I0314 18:20:04.475492  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-105786-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:20:04.605925  960722 start.go:318] duration metric: took 35.385808199s to joinCluster
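	Joining m02 as a second control-plane node is a plain kubeadm join against the VIP-backed endpoint control-plane.minikube.internal:8443, using the bootstrap token and CA cert hash that `kubeadm token create --print-join-command` returned on the primary, plus --control-plane and the node's own advertise address. A sketch of assembling that command line is below; the token and hash there are placeholders, not the values from this run.

package main

import "fmt"

// joinCommand builds the control-plane join invocation that the log above runs
// over SSH. token and caCertHash are placeholders; real values come from the
// primary node's `kubeadm token create --print-join-command` output.
func joinCommand(endpoint, token, caCertHash, nodeName, advertiseIP string) string {
	return fmt.Sprintf(
		"kubeadm join %s --token %s --discovery-token-ca-cert-hash %s "+
			"--ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock "+
			"--node-name=%s --control-plane --apiserver-advertise-address=%s --apiserver-bind-port=8443",
		endpoint, token, caCertHash, nodeName, advertiseIP)
}

func main() {
	fmt.Println(joinCommand(
		"control-plane.minikube.internal:8443",
		"<token>", "sha256:<hash>",
		"ha-105786-m02", "192.168.39.245"))
}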
	I0314 18:20:04.606049  960722 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:04.607661  960722 out.go:177] * Verifying Kubernetes components...
	I0314 18:20:04.606377  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:04.609186  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:04.843807  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:20:04.885426  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:20:04.885833  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:20:04.885937  960722 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.170:8443
	I0314 18:20:04.886257  960722 node_ready.go:35] waiting up to 6m0s for node "ha-105786-m02" to be "Ready" ...
	I0314 18:20:04.886405  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:04.886417  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:04.886433  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:04.886441  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:04.899478  960722 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:20:05.387461  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:05.387484  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:05.387492  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:05.387498  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:05.392086  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:05.887327  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:05.887358  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:05.887372  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:05.887377  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:05.915477  960722 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0314 18:20:06.386528  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:06.386564  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:06.386576  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:06.386581  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:06.391096  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:06.887108  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:06.887131  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:06.887142  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:06.887148  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:06.892327  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:06.893902  960722 node_ready.go:53] node "ha-105786-m02" has status "Ready":"False"
	I0314 18:20:07.386686  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:07.386711  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:07.386721  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:07.386724  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:07.394968  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:20:07.887419  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:07.887445  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:07.887453  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:07.887457  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:07.892320  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:08.387227  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:08.387251  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:08.387260  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:08.387264  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:08.390902  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:08.886738  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:08.886765  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:08.886793  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:08.886797  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:08.890641  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:09.386590  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:09.386620  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:09.386631  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:09.386638  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:09.390267  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:09.390989  960722 node_ready.go:53] node "ha-105786-m02" has status "Ready":"False"
	I0314 18:20:09.886947  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:09.886973  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:09.886983  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:09.886988  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:09.890744  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.386910  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.386939  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.386949  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.386955  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.390382  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.391412  960722 node_ready.go:49] node "ha-105786-m02" has status "Ready":"True"
	I0314 18:20:10.391443  960722 node_ready.go:38] duration metric: took 5.505132422s for node "ha-105786-m02" to be "Ready" ...
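	The GET loop above is the client-side "wait for Ready" phase: roughly every 500ms the node object is fetched through the primary's API server (the stale VIP host is overridden with 192.168.39.170:8443) until the Ready condition turns True, here after about 5.5s. A compact client-go sketch of the same idea is below; it assumes a kubeconfig on disk and is not minikube's actual node_ready code.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForNodeReady polls the node until its Ready condition is True or the
// timeout expires.
func waitForNodeReady(cs *kubernetes.Clientset, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		node, err := cs.CoreV1().Nodes().Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range node.Status.Conditions {
				if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("node %s not Ready after %s", name, timeout)
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18384-942544/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	if err := waitForNodeReady(cs, "ha-105786-m02", 6*time.Minute); err != nil {
		panic(err)
	}
	fmt.Println("node is Ready")
}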
	I0314 18:20:10.391458  960722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:20:10.391558  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:10.391573  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.391583  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.391589  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.405003  960722 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:20:10.411010  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.411083  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-cx8rc
	I0314 18:20:10.411092  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.411099  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.411104  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.415483  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:10.416287  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.416316  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.416327  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.416332  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.419105  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.419998  960722 pod_ready.go:92] pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.420035  960722 pod_ready.go:81] duration metric: took 8.983237ms for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.420044  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.420089  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsddl
	I0314 18:20:10.420098  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.420105  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.420110  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.423501  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.424705  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.424734  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.424745  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.424751  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.427816  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.428917  960722 pod_ready.go:92] pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.428932  960722 pod_ready.go:81] duration metric: took 8.882551ms for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.428941  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.428994  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786
	I0314 18:20:10.429003  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.429010  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.429013  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.431546  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.432130  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.432143  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.432150  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.432153  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.435304  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.435905  960722 pod_ready.go:92] pod "etcd-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.435920  960722 pod_ready.go:81] duration metric: took 6.973841ms for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.435929  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.435970  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:10.435979  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.435985  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.435990  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.438803  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.439389  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.439408  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.439418  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.439425  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.442360  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.936677  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:10.936706  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.936715  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.936719  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.940339  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.941210  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.941230  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.941237  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.941239  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.944113  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:11.437016  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:11.437038  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.437047  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.437050  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.440773  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:11.441314  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:11.441332  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.441338  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.441343  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.444263  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:11.936616  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:11.936637  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.936645  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.936650  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.940365  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:11.941270  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:11.941287  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.941294  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.941297  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.944311  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:12.436394  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:12.436416  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.436423  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.436428  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.440606  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:12.441378  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:12.441402  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.441413  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.441420  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.444611  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.445194  960722 pod_ready.go:102] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"False"
	I0314 18:20:12.937037  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:12.937061  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.937068  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.937072  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.941129  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:12.941848  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:12.941864  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.941871  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.941874  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.945312  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.945977  960722 pod_ready.go:92] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:12.945997  960722 pod_ready.go:81] duration metric: took 2.510061796s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
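The wait above is a plain poll: roughly every 500ms the pod is re-fetched and its Ready condition inspected, bounded by the 6m0s ceiling. A minimal client-go sketch of that pattern, assuming a kubeconfig at the default path; the pod name is taken from the log and the helper is illustrative, not minikube's actual pod_ready.go code:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		pod, err := client.CoreV1().Pods("kube-system").Get(context.TODO(), "etcd-ha-105786-m02", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond) // the log above shows ~500ms between polls
	}
	fmt.Println("timed out waiting for pod to be Ready")
}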
	I0314 18:20:12.946010  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.946058  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786
	I0314 18:20:12.946068  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.946075  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.946080  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.949209  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.949841  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:12.949858  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.949866  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.949870  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.952233  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:12.952748  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:12.952764  960722 pod_ready.go:81] duration metric: took 6.746234ms for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.952772  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.987042  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786
	I0314 18:20:12.987057  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.987064  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.987069  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.989880  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:13.186957  960722 request.go:629] Waited for 196.361998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.187046  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.187054  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.187060  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.187065  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.190577  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.191335  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.191357  960722 pod_ready.go:81] duration metric: took 238.577402ms for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
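The "Waited for … due to client-side throttling, not priority and fairness" lines are emitted by client-go's own token-bucket rate limiter, not by the API server. A short sketch of where that limiter lives, assuming plain client-go; the defaults of QPS=5 and Burst=10 are what produce the ~200ms waits seen here:

package main

import (
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cfg.QPS = 50    // default is 5 requests/second; below that, extra requests wait client-side
	cfg.Burst = 100 // default is 10
	_ = kubernetes.NewForConfigOrDie(cfg)
}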
	I0314 18:20:13.191367  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.387841  960722 request.go:629] Waited for 196.392155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:20:13.387924  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:20:13.387932  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.387940  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.387947  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.391798  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.588005  960722 request.go:629] Waited for 195.407854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:13.588069  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:13.588076  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.588111  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.588123  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.591735  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.592845  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.592867  960722 pod_ready.go:81] duration metric: took 401.493419ms for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.592876  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.786933  960722 request.go:629] Waited for 193.970665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:20:13.787029  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:20:13.787040  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.787053  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.787062  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.790577  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.986935  960722 request.go:629] Waited for 195.359941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.987046  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.987080  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.987093  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.987097  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.991218  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:13.991858  960722 pod_ready.go:92] pod "kube-proxy-hd8mx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.991880  960722 pod_ready.go:81] duration metric: took 398.997636ms for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.991890  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.186927  960722 request.go:629] Waited for 194.956029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:20:14.187037  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:20:14.187046  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.187095  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.187108  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.191255  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:14.387256  960722 request.go:629] Waited for 195.280121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:14.387315  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:14.387320  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.387330  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.387334  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.390941  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:14.391600  960722 pod_ready.go:92] pod "kube-proxy-qpz89" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:14.391617  960722 pod_ready.go:81] duration metric: took 399.721436ms for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.391631  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.587729  960722 request.go:629] Waited for 196.012378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:20:14.587807  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:20:14.587812  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.587819  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.587823  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.592229  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:14.787292  960722 request.go:629] Waited for 194.393534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:14.787359  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:14.787364  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.787371  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.787377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.790943  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:14.791580  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:14.791602  960722 pod_ready.go:81] duration metric: took 399.959535ms for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.791612  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.987819  960722 request.go:629] Waited for 196.101347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:20:14.987897  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:20:14.987906  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.987914  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.987921  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.991143  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:15.187054  960722 request.go:629] Waited for 195.314505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:15.187145  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:15.187152  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.187162  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.187168  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.192407  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:15.193184  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:15.193217  960722 pod_ready.go:81] duration metric: took 401.59897ms for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:15.193232  960722 pod_ready.go:38] duration metric: took 4.801751417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:20:15.193251  960722 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:20:15.193356  960722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:20:15.211663  960722 api_server.go:72] duration metric: took 10.605552869s to wait for apiserver process to appear ...
	I0314 18:20:15.211689  960722 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:20:15.211719  960722 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0314 18:20:15.216515  960722 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0314 18:20:15.216590  960722 round_trippers.go:463] GET https://192.168.39.170:8443/version
	I0314 18:20:15.216601  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.216611  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.216621  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.217612  960722 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0314 18:20:15.217737  960722 api_server.go:141] control plane version: v1.28.4
	I0314 18:20:15.217758  960722 api_server.go:131] duration metric: took 6.053816ms to wait for apiserver health ...
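The healthz step is just an HTTPS GET against the control-plane endpoint, expecting a 200 with body "ok" before the /version call is made. A minimal sketch of that probe; note it skips certificate verification purely for brevity, whereas minikube itself trusts the cluster CA:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func main() {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get("https://192.168.39.170:8443/healthz")
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
}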
	I0314 18:20:15.217769  960722 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:20:15.387184  960722 request.go:629] Waited for 169.33673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.387268  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.387276  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.387284  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.387290  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.392364  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:15.400517  960722 system_pods.go:59] 16 kube-system pods found
	I0314 18:20:15.400556  960722 system_pods.go:61] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:20:15.400564  960722 system_pods.go:61] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:20:15.400569  960722 system_pods.go:61] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:20:15.400575  960722 system_pods.go:61] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:20:15.400579  960722 system_pods.go:61] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:20:15.400583  960722 system_pods.go:61] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:20:15.400588  960722 system_pods.go:61] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:20:15.400594  960722 system_pods.go:61] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:20:15.400603  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:20:15.400608  960722 system_pods.go:61] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:20:15.400614  960722 system_pods.go:61] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:20:15.400622  960722 system_pods.go:61] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:20:15.400627  960722 system_pods.go:61] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:20:15.400639  960722 system_pods.go:61] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.400650  960722 system_pods.go:61] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.400660  960722 system_pods.go:61] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:20:15.400669  960722 system_pods.go:74] duration metric: took 182.888268ms to wait for pod list to return data ...
	I0314 18:20:15.400681  960722 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:20:15.587068  960722 request.go:629] Waited for 186.276852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:20:15.587153  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:20:15.587163  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.587173  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.587181  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.591424  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:15.591776  960722 default_sa.go:45] found service account: "default"
	I0314 18:20:15.591804  960722 default_sa.go:55] duration metric: took 191.114412ms for default service account to be created ...
	I0314 18:20:15.591816  960722 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:20:15.787310  960722 request.go:629] Waited for 195.392339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.787397  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.787404  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.787416  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.787423  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.796268  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:20:15.801328  960722 system_pods.go:86] 17 kube-system pods found
	I0314 18:20:15.801360  960722 system_pods.go:89] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:20:15.801369  960722 system_pods.go:89] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:20:15.801375  960722 system_pods.go:89] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:20:15.801380  960722 system_pods.go:89] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:20:15.801386  960722 system_pods.go:89] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:20:15.801392  960722 system_pods.go:89] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:20:15.801398  960722 system_pods.go:89] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:20:15.801403  960722 system_pods.go:89] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Pending
	I0314 18:20:15.801409  960722 system_pods.go:89] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:20:15.801419  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:20:15.801428  960722 system_pods.go:89] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:20:15.801437  960722 system_pods.go:89] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:20:15.801445  960722 system_pods.go:89] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:20:15.801454  960722 system_pods.go:89] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:20:15.801469  960722 system_pods.go:89] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.801483  960722 system_pods.go:89] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.801494  960722 system_pods.go:89] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:20:15.801506  960722 system_pods.go:126] duration metric: took 209.682732ms to wait for k8s-apps to be running ...
	I0314 18:20:15.801517  960722 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:20:15.801583  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:20:15.824716  960722 system_svc.go:56] duration metric: took 23.189692ms WaitForService to wait for kubelet
	I0314 18:20:15.824758  960722 kubeadm.go:576] duration metric: took 11.218651856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:20:15.824785  960722 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:20:15.987199  960722 request.go:629] Waited for 162.322824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes
	I0314 18:20:15.987266  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes
	I0314 18:20:15.987271  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.987279  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.987284  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.991330  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:15.992311  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:20:15.992338  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:20:15.992352  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:20:15.992356  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:20:15.992360  960722 node_conditions.go:105] duration metric: took 167.569722ms to run NodePressure ...
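The NodePressure step reads each node's reported capacity from its status, which is where the "17734596Ki" ephemeral-storage and "cpu capacity is 2" figures above come from. A small client-go sketch reading the same fields and the pressure conditions (illustrative; assumes the default kubeconfig):

package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}
	for _, n := range nodes.Items {
		// capacity comes from node.Status.Capacity, e.g. ephemeral-storage and cpu
		fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
			n.Name,
			n.Status.Capacity.StorageEphemeral().String(),
			n.Status.Capacity.Cpu().String())
		for _, c := range n.Status.Conditions {
			if c.Type == corev1.NodeMemoryPressure || c.Type == corev1.NodeDiskPressure {
				fmt.Printf("  %s=%s\n", c.Type, c.Status)
			}
		}
	}
}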
	I0314 18:20:15.992376  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:20:15.992416  960722 start.go:254] writing updated cluster config ...
	I0314 18:20:15.994795  960722 out.go:177] 
	I0314 18:20:15.996915  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:15.997013  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:15.999109  960722 out.go:177] * Starting "ha-105786-m03" control-plane node in "ha-105786" cluster
	I0314 18:20:16.000469  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:20:16.000499  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:20:16.000623  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:20:16.000640  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:20:16.000744  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:16.000982  960722 start.go:360] acquireMachinesLock for ha-105786-m03: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:20:16.001040  960722 start.go:364] duration metric: took 34.277µs to acquireMachinesLock for "ha-105786-m03"
	I0314 18:20:16.001062  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:16.001168  960722 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0314 18:20:16.002892  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:20:16.002985  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:16.003028  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:16.018758  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0314 18:20:16.019190  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:16.019709  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:16.019730  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:16.020073  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:16.020330  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:16.020505  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:16.020687  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:20:16.020720  960722 client.go:168] LocalClient.Create starting
	I0314 18:20:16.020748  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:20:16.020782  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:20:16.020798  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:20:16.020855  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:20:16.020874  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:20:16.020885  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:20:16.020901  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:20:16.020909  960722 main.go:141] libmachine: (ha-105786-m03) Calling .PreCreateCheck
	I0314 18:20:16.021098  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:16.021494  960722 main.go:141] libmachine: Creating machine...
	I0314 18:20:16.021508  960722 main.go:141] libmachine: (ha-105786-m03) Calling .Create
	I0314 18:20:16.021673  960722 main.go:141] libmachine: (ha-105786-m03) Creating KVM machine...
	I0314 18:20:16.022984  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found existing default KVM network
	I0314 18:20:16.023177  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found existing private KVM network mk-ha-105786
	I0314 18:20:16.023318  960722 main.go:141] libmachine: (ha-105786-m03) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 ...
	I0314 18:20:16.023344  960722 main.go:141] libmachine: (ha-105786-m03) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:20:16.023460  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.023300  961384 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:20:16.023554  960722 main.go:141] libmachine: (ha-105786-m03) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:20:16.271798  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.271649  961384 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa...
	I0314 18:20:16.379260  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.379112  961384 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/ha-105786-m03.rawdisk...
	I0314 18:20:16.379289  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Writing magic tar header
	I0314 18:20:16.379305  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Writing SSH key tar header
	I0314 18:20:16.379313  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.379258  961384 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 ...
	I0314 18:20:16.379384  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03
	I0314 18:20:16.379457  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 (perms=drwx------)
	I0314 18:20:16.379484  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:20:16.379499  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:20:16.379520  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:20:16.379534  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:20:16.379553  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:20:16.379570  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:20:16.379585  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:20:16.379605  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:20:16.379619  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:20:16.379633  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:20:16.379644  960722 main.go:141] libmachine: (ha-105786-m03) Creating domain...
	I0314 18:20:16.379655  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home
	I0314 18:20:16.379671  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Skipping /home - not owner
	I0314 18:20:16.380813  960722 main.go:141] libmachine: (ha-105786-m03) define libvirt domain using xml: 
	I0314 18:20:16.380837  960722 main.go:141] libmachine: (ha-105786-m03) <domain type='kvm'>
	I0314 18:20:16.380846  960722 main.go:141] libmachine: (ha-105786-m03)   <name>ha-105786-m03</name>
	I0314 18:20:16.380851  960722 main.go:141] libmachine: (ha-105786-m03)   <memory unit='MiB'>2200</memory>
	I0314 18:20:16.380857  960722 main.go:141] libmachine: (ha-105786-m03)   <vcpu>2</vcpu>
	I0314 18:20:16.380866  960722 main.go:141] libmachine: (ha-105786-m03)   <features>
	I0314 18:20:16.380873  960722 main.go:141] libmachine: (ha-105786-m03)     <acpi/>
	I0314 18:20:16.380877  960722 main.go:141] libmachine: (ha-105786-m03)     <apic/>
	I0314 18:20:16.380886  960722 main.go:141] libmachine: (ha-105786-m03)     <pae/>
	I0314 18:20:16.380893  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.380911  960722 main.go:141] libmachine: (ha-105786-m03)   </features>
	I0314 18:20:16.380929  960722 main.go:141] libmachine: (ha-105786-m03)   <cpu mode='host-passthrough'>
	I0314 18:20:16.380943  960722 main.go:141] libmachine: (ha-105786-m03)   
	I0314 18:20:16.380949  960722 main.go:141] libmachine: (ha-105786-m03)   </cpu>
	I0314 18:20:16.380955  960722 main.go:141] libmachine: (ha-105786-m03)   <os>
	I0314 18:20:16.380973  960722 main.go:141] libmachine: (ha-105786-m03)     <type>hvm</type>
	I0314 18:20:16.380983  960722 main.go:141] libmachine: (ha-105786-m03)     <boot dev='cdrom'/>
	I0314 18:20:16.380993  960722 main.go:141] libmachine: (ha-105786-m03)     <boot dev='hd'/>
	I0314 18:20:16.381020  960722 main.go:141] libmachine: (ha-105786-m03)     <bootmenu enable='no'/>
	I0314 18:20:16.381035  960722 main.go:141] libmachine: (ha-105786-m03)   </os>
	I0314 18:20:16.381081  960722 main.go:141] libmachine: (ha-105786-m03)   <devices>
	I0314 18:20:16.381111  960722 main.go:141] libmachine: (ha-105786-m03)     <disk type='file' device='cdrom'>
	I0314 18:20:16.381134  960722 main.go:141] libmachine: (ha-105786-m03)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/boot2docker.iso'/>
	I0314 18:20:16.381147  960722 main.go:141] libmachine: (ha-105786-m03)       <target dev='hdc' bus='scsi'/>
	I0314 18:20:16.381160  960722 main.go:141] libmachine: (ha-105786-m03)       <readonly/>
	I0314 18:20:16.381170  960722 main.go:141] libmachine: (ha-105786-m03)     </disk>
	I0314 18:20:16.381184  960722 main.go:141] libmachine: (ha-105786-m03)     <disk type='file' device='disk'>
	I0314 18:20:16.381197  960722 main.go:141] libmachine: (ha-105786-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:20:16.381214  960722 main.go:141] libmachine: (ha-105786-m03)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/ha-105786-m03.rawdisk'/>
	I0314 18:20:16.381225  960722 main.go:141] libmachine: (ha-105786-m03)       <target dev='hda' bus='virtio'/>
	I0314 18:20:16.381234  960722 main.go:141] libmachine: (ha-105786-m03)     </disk>
	I0314 18:20:16.381239  960722 main.go:141] libmachine: (ha-105786-m03)     <interface type='network'>
	I0314 18:20:16.381247  960722 main.go:141] libmachine: (ha-105786-m03)       <source network='mk-ha-105786'/>
	I0314 18:20:16.381252  960722 main.go:141] libmachine: (ha-105786-m03)       <model type='virtio'/>
	I0314 18:20:16.381257  960722 main.go:141] libmachine: (ha-105786-m03)     </interface>
	I0314 18:20:16.381262  960722 main.go:141] libmachine: (ha-105786-m03)     <interface type='network'>
	I0314 18:20:16.381282  960722 main.go:141] libmachine: (ha-105786-m03)       <source network='default'/>
	I0314 18:20:16.381298  960722 main.go:141] libmachine: (ha-105786-m03)       <model type='virtio'/>
	I0314 18:20:16.381312  960722 main.go:141] libmachine: (ha-105786-m03)     </interface>
	I0314 18:20:16.381323  960722 main.go:141] libmachine: (ha-105786-m03)     <serial type='pty'>
	I0314 18:20:16.381337  960722 main.go:141] libmachine: (ha-105786-m03)       <target port='0'/>
	I0314 18:20:16.381350  960722 main.go:141] libmachine: (ha-105786-m03)     </serial>
	I0314 18:20:16.381362  960722 main.go:141] libmachine: (ha-105786-m03)     <console type='pty'>
	I0314 18:20:16.381381  960722 main.go:141] libmachine: (ha-105786-m03)       <target type='serial' port='0'/>
	I0314 18:20:16.381394  960722 main.go:141] libmachine: (ha-105786-m03)     </console>
	I0314 18:20:16.381407  960722 main.go:141] libmachine: (ha-105786-m03)     <rng model='virtio'>
	I0314 18:20:16.381422  960722 main.go:141] libmachine: (ha-105786-m03)       <backend model='random'>/dev/random</backend>
	I0314 18:20:16.381432  960722 main.go:141] libmachine: (ha-105786-m03)     </rng>
	I0314 18:20:16.381442  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.381459  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.381473  960722 main.go:141] libmachine: (ha-105786-m03)   </devices>
	I0314 18:20:16.381485  960722 main.go:141] libmachine: (ha-105786-m03) </domain>
	I0314 18:20:16.381501  960722 main.go:141] libmachine: (ha-105786-m03) 
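The <domain> XML printed above is what gets handed to libvirt to define and then start the VM ("Creating domain..."). A stripped-down sketch of that hand-off; the libvirt.org/go/libvirt binding is an assumption here, as minikube's kvm2 driver does this inside its own machine-driver plugin:

package main

import (
	"fmt"
	"os"

	libvirt "libvirt.org/go/libvirt"
)

func main() {
	// the <domain> document logged above, saved to a file for this sketch
	xml, err := os.ReadFile("ha-105786-m03.xml")
	if err != nil {
		panic(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // same KVMQemuURI as in the config dump
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		panic(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // "Creating domain..." = starting it
		panic(err)
	}
	fmt.Println("domain started")
}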
	I0314 18:20:16.388782  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:1b:7e:1f in network default
	I0314 18:20:16.389402  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring networks are active...
	I0314 18:20:16.389429  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:16.390465  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring network default is active
	I0314 18:20:16.390750  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring network mk-ha-105786 is active
	I0314 18:20:16.391248  960722 main.go:141] libmachine: (ha-105786-m03) Getting domain xml...
	I0314 18:20:16.391969  960722 main.go:141] libmachine: (ha-105786-m03) Creating domain...
	I0314 18:20:17.589663  960722 main.go:141] libmachine: (ha-105786-m03) Waiting to get IP...
	I0314 18:20:17.590484  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:17.590914  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:17.590984  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:17.590911  961384 retry.go:31] will retry after 263.626588ms: waiting for machine to come up
	I0314 18:20:17.856532  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:17.857087  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:17.857116  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:17.857041  961384 retry.go:31] will retry after 382.637785ms: waiting for machine to come up
	I0314 18:20:18.241581  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:18.242030  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:18.242062  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:18.241981  961384 retry.go:31] will retry after 367.090897ms: waiting for machine to come up
	I0314 18:20:18.610712  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:18.611255  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:18.611281  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:18.611225  961384 retry.go:31] will retry after 600.586652ms: waiting for machine to come up
	I0314 18:20:19.213062  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:19.213584  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:19.213618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:19.213525  961384 retry.go:31] will retry after 559.92281ms: waiting for machine to come up
	I0314 18:20:19.775309  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:19.775748  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:19.775781  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:19.775691  961384 retry.go:31] will retry after 574.524705ms: waiting for machine to come up
	I0314 18:20:20.351375  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:20.351868  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:20.351893  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:20.351811  961384 retry.go:31] will retry after 972.048987ms: waiting for machine to come up
	I0314 18:20:21.325550  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:21.326031  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:21.326063  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:21.325992  961384 retry.go:31] will retry after 1.371761698s: waiting for machine to come up
	I0314 18:20:22.699573  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:22.700021  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:22.700053  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:22.699967  961384 retry.go:31] will retry after 1.481455468s: waiting for machine to come up
	I0314 18:20:24.183618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:24.184136  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:24.184168  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:24.184074  961384 retry.go:31] will retry after 1.805133143s: waiting for machine to come up
	I0314 18:20:25.991346  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:25.992156  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:25.992189  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:25.992110  961384 retry.go:31] will retry after 2.770039006s: waiting for machine to come up
	I0314 18:20:28.765632  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:28.766175  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:28.766252  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:28.766166  961384 retry.go:31] will retry after 3.54565346s: waiting for machine to come up
	I0314 18:20:32.313302  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:32.313795  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:32.313823  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:32.313752  961384 retry.go:31] will retry after 2.839983125s: waiting for machine to come up
	I0314 18:20:35.155209  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:35.155526  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:35.155549  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:35.155486  961384 retry.go:31] will retry after 4.973546957s: waiting for machine to come up
	I0314 18:20:40.133497  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.133990  960722 main.go:141] libmachine: (ha-105786-m03) Found IP for machine: 192.168.39.190
	I0314 18:20:40.134029  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has current primary IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.134053  960722 main.go:141] libmachine: (ha-105786-m03) Reserving static IP address...
	I0314 18:20:40.134396  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find host DHCP lease matching {name: "ha-105786-m03", mac: "52:54:00:34:3f:75", ip: "192.168.39.190"} in network mk-ha-105786
	I0314 18:20:40.214540  960722 main.go:141] libmachine: (ha-105786-m03) Reserved static IP address: 192.168.39.190
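Editor's note: the repeated "will retry after …" lines above are libmachine's wait loop for the guest to obtain a DHCP lease before its static IP can be reserved. Below is a minimal Go sketch of that retry-with-growing-delay pattern; the `machineHasIP` helper, the timeout, and the exact delay growth are illustrative assumptions, not minikube's actual implementation.

```go
package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// machineHasIP is a stand-in for querying libvirt for the domain's DHCP lease.
func machineHasIP() (string, error) {
	return "", errors.New("unable to find current IP address") // placeholder
}

// waitForIP retries with a jittered, growing delay, mirroring the
// "will retry after ...: waiting for machine to come up" lines in the log.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := time.Second
	for time.Now().Before(deadline) {
		if ip, err := machineHasIP(); err == nil {
			return ip, nil
		}
		wait := delay + time.Duration(rand.Int63n(int64(delay/2)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow the base delay between attempts
	}
	return "", errors.New("timed out waiting for machine IP")
}

func main() {
	if ip, err := waitForIP(10 * time.Second); err != nil {
		fmt.Println(err)
	} else {
		fmt.Println("machine IP:", ip)
	}
}
```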
	I0314 18:20:40.214566  960722 main.go:141] libmachine: (ha-105786-m03) Waiting for SSH to be available...
	I0314 18:20:40.214617  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Getting to WaitForSSH function...
	I0314 18:20:40.217661  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.218202  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.218242  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.218357  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using SSH client type: external
	I0314 18:20:40.218385  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa (-rw-------)
	I0314 18:20:40.218418  960722 main.go:141] libmachine: (ha-105786-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:20:40.218436  960722 main.go:141] libmachine: (ha-105786-m03) DBG | About to run SSH command:
	I0314 18:20:40.218448  960722 main.go:141] libmachine: (ha-105786-m03) DBG | exit 0
	I0314 18:20:40.344502  960722 main.go:141] libmachine: (ha-105786-m03) DBG | SSH cmd err, output: <nil>: 
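Editor's note: the DBG lines above show libmachine probing SSH availability by shelling out to the system `ssh` binary with host-key checking disabled and running `exit 0` on the guest. A sketch of composing that probe with os/exec, reusing the options shown in the log (the `probeSSH` helper name and sample paths are illustrative):

```go
package main

import (
	"fmt"
	"os/exec"
)

// probeSSH runs `ssh ... user@ip exit 0` the way the external SSH client in
// the log does; a zero exit status means the guest's sshd accepts logins.
func probeSSH(user, ip, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, ip),
		"exit", "0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	if err := probeSSH("docker", "192.168.39.190", "/path/to/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
	}
}
```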
	I0314 18:20:40.344797  960722 main.go:141] libmachine: (ha-105786-m03) KVM machine creation complete!
	I0314 18:20:40.345142  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:40.345728  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:40.345981  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:40.346180  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:20:40.346197  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:20:40.347581  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:20:40.347596  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:20:40.347602  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:20:40.347609  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.350331  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.350772  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.350801  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.350966  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.351172  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.351341  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.351494  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.351708  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.352031  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.352047  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:20:40.451470  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:40.451502  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:20:40.451512  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.454593  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.455030  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.455061  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.455263  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.455466  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.455629  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.455748  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.455909  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.456082  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.456094  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:20:40.561434  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:20:40.561509  960722 main.go:141] libmachine: found compatible host: buildroot
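Editor's note: provisioner detection above works by running `cat /etc/os-release` over SSH and matching the `ID=` field ("buildroot" here). A minimal sketch of that parse, fed the same output the log shows:

```go
package main

import (
	"bufio"
	"fmt"
	"strings"
)

// detectProvisioner pulls the ID= field out of /etc/os-release output, which
// is how the log arrives at "found compatible host: buildroot".
func detectProvisioner(osRelease string) string {
	sc := bufio.NewScanner(strings.NewReader(osRelease))
	for sc.Scan() {
		line := sc.Text()
		if strings.HasPrefix(line, "ID=") {
			return strings.Trim(strings.TrimPrefix(line, "ID="), `"`)
		}
	}
	return ""
}

func main() {
	out := "NAME=Buildroot\nVERSION=2023.02.9-dirty\nID=buildroot\nVERSION_ID=2023.02.9\n"
	fmt.Println(detectProvisioner(out)) // buildroot
}
```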
	I0314 18:20:40.561523  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:20:40.561535  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.561810  960722 buildroot.go:166] provisioning hostname "ha-105786-m03"
	I0314 18:20:40.561837  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.562093  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.564618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.564980  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.565017  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.565118  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.565328  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.565529  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.565709  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.565881  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.566055  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.566068  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786-m03 && echo "ha-105786-m03" | sudo tee /etc/hostname
	I0314 18:20:40.684093  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786-m03
	
	I0314 18:20:40.684125  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.686884  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.687210  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.687241  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.687381  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.687571  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.687749  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.687905  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.688073  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.688261  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.688278  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:20:40.797541  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:40.797575  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:20:40.797596  960722 buildroot.go:174] setting up certificates
	I0314 18:20:40.797611  960722 provision.go:84] configureAuth start
	I0314 18:20:40.797623  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.797919  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:40.800767  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.801185  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.801218  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.801418  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.804200  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.804646  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.804679  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.804861  960722 provision.go:143] copyHostCerts
	I0314 18:20:40.804893  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:20:40.804925  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:20:40.804935  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:20:40.805001  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:20:40.805072  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:20:40.805089  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:20:40.805096  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:20:40.805119  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:20:40.805162  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:20:40.805178  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:20:40.805184  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:20:40.805203  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:20:40.805289  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786-m03 san=[127.0.0.1 192.168.39.190 ha-105786-m03 localhost minikube]
	I0314 18:20:41.054914  960722 provision.go:177] copyRemoteCerts
	I0314 18:20:41.054977  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:20:41.055004  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.057639  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.057975  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.057998  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.058194  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.058387  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.058565  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.058698  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.144719  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:20:41.144803  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:20:41.171816  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:20:41.171887  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:20:41.199381  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:20:41.199468  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:20:41.226957  960722 provision.go:87] duration metric: took 429.333138ms to configureAuth
	I0314 18:20:41.226985  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:20:41.227214  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:41.227307  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.229976  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.230427  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.230463  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.230679  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.230914  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.231089  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.231272  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.231498  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:41.231734  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:41.231753  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:20:41.512036  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:20:41.512069  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:20:41.512080  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetURL
	I0314 18:20:41.513530  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using libvirt version 6000000
	I0314 18:20:41.516319  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.516759  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.516797  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.516916  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:20:41.516932  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:20:41.516952  960722 client.go:171] duration metric: took 25.496210948s to LocalClient.Create
	I0314 18:20:41.516990  960722 start.go:167] duration metric: took 25.49630446s to libmachine.API.Create "ha-105786"
	I0314 18:20:41.517002  960722 start.go:293] postStartSetup for "ha-105786-m03" (driver="kvm2")
	I0314 18:20:41.517019  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:20:41.517042  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.517289  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:20:41.517320  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.519485  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.519861  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.519889  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.520049  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.520272  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.520449  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.520604  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.605448  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:20:41.610671  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:20:41.610704  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:20:41.610786  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:20:41.610873  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:20:41.610886  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:20:41.611012  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:20:41.621811  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:20:41.652036  960722 start.go:296] duration metric: took 135.017948ms for postStartSetup
	I0314 18:20:41.652091  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:41.652721  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:41.655406  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.655870  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.655905  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.656145  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:41.656406  960722 start.go:128] duration metric: took 25.655223653s to createHost
	I0314 18:20:41.656441  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.659003  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.659391  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.659412  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.659549  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.659758  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.659932  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.660071  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.660253  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:41.660456  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:41.660470  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:20:41.761414  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440441.743729611
	
	I0314 18:20:41.761440  960722 fix.go:216] guest clock: 1710440441.743729611
	I0314 18:20:41.761447  960722 fix.go:229] Guest: 2024-03-14 18:20:41.743729611 +0000 UTC Remote: 2024-03-14 18:20:41.656424316 +0000 UTC m=+158.810269334 (delta=87.305295ms)
	I0314 18:20:41.761464  960722 fix.go:200] guest clock delta is within tolerance: 87.305295ms
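Editor's note: the fix.go lines above read the guest clock over SSH (`date` with seconds and nanoseconds), compare it against the host timestamp, and accept the skew if it is inside a tolerance. A small sketch of that delta check using the values from the log; the one-second tolerance is an assumption for illustration:

```go
package main

import (
	"fmt"
	"time"
)

// clockDelta reports the absolute guest-host skew and whether it is inside
// tolerance, mirroring "guest clock delta is within tolerance" above.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1710440441, 743729611) // value reported by the guest
	host := time.Unix(1710440441, 656424316)  // host-side timestamp
	d, ok := clockDelta(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta≈87.3ms within tolerance=true
}
```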
	I0314 18:20:41.761469  960722 start.go:83] releasing machines lock for "ha-105786-m03", held for 25.760417756s
	I0314 18:20:41.761487  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.761771  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:41.764594  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.764999  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.765031  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.767266  960722 out.go:177] * Found network options:
	I0314 18:20:41.768711  960722 out.go:177]   - NO_PROXY=192.168.39.170,192.168.39.245
	W0314 18:20:41.769997  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:20:41.770017  960722 proxy.go:119] fail to check proxy env: Error ip not in block
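Editor's note: the two warnings above come from checking whether each NO_PROXY entry (192.168.39.170, 192.168.39.245) is a CIDR block containing the new node's IP; a bare host entry cannot be parsed as a block, which is one plausible reading of "fail to check proxy env: Error ip not in block". A hedged sketch of that per-entry check:

```go
package main

import (
	"fmt"
	"net"
	"strings"
)

// ipInBlock treats a single NO_PROXY entry as a CIDR block and reports
// whether ip falls inside it; bare host entries fail the parse, which is the
// likely source of the "ip not in block" warnings above.
func ipInBlock(entry, ip string) (bool, error) {
	_, block, err := net.ParseCIDR(entry)
	if err != nil {
		return false, fmt.Errorf("ip not in block: %w", err)
	}
	return block.Contains(net.ParseIP(ip)), nil
}

func main() {
	for _, entry := range strings.Split("192.168.39.170,192.168.39.245", ",") {
		ok, err := ipInBlock(entry, "192.168.39.190")
		if err != nil {
			fmt.Println("fail to check proxy env:", err)
			continue
		}
		fmt.Println(entry, "covers node IP:", ok)
	}
}
```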
	I0314 18:20:41.770030  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770550  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770778  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770920  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:20:41.770963  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	W0314 18:20:41.770999  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:20:41.771026  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:20:41.771103  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:20:41.771128  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.773706  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774056  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774090  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.774108  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774292  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.774468  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.774564  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.774589  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774648  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.774759  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.774883  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.774977  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.775149  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.775315  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:42.013893  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:20:42.021607  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:20:42.021679  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:20:42.039748  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:20:42.039777  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:20:42.039853  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:20:42.059119  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:20:42.074558  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:20:42.074617  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:20:42.089661  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:20:42.104256  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:20:42.233317  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:20:42.392041  960722 docker.go:233] disabling docker service ...
	I0314 18:20:42.392130  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:20:42.408543  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:20:42.422591  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:20:42.563722  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:20:42.688792  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:20:42.704444  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:20:42.725324  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:20:42.725397  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.737561  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:20:42.737618  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.749624  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.761367  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.773962  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:20:42.786135  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:20:42.796972  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:20:42.797027  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:20:42.811989  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:20:42.822647  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:42.950792  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
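Editor's note: the block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place with sed to pin the pause image and switch the cgroup manager to cgroupfs, then reloads systemd and restarts CRI-O. A sketch that replays those same edits through a command runner; the `runCmd` helper stands in for minikube's ssh_runner and simply executes locally so the sketch is self-contained:

```go
package main

import (
	"fmt"
	"os/exec"
)

// runCmd is a stand-in for minikube's ssh_runner: it runs the command
// locally via bash and echoes the command and its combined output.
func runCmd(cmd string) error {
	out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
	fmt.Printf("$ %s\n%s", cmd, out)
	return err
}

// configureCRIO issues the same sed edits and restart shown in the log.
func configureCRIO(pauseImage, cgroupMgr string) error {
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, pauseImage),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' /etc/crio/crio.conf.d/02-crio.conf`, cgroupMgr),
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if err := runCmd(c); err != nil {
			return err
		}
	}
	return nil
}

func main() {
	_ = configureCRIO("registry.k8s.io/pause:3.9", "cgroupfs")
}
```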
	I0314 18:20:43.106453  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:20:43.106542  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:20:43.112384  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:20:43.112441  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:20:43.116759  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:20:43.158761  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:20:43.158863  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:20:43.192334  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:20:43.229877  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:20:43.231273  960722 out.go:177]   - env NO_PROXY=192.168.39.170
	I0314 18:20:43.232575  960722 out.go:177]   - env NO_PROXY=192.168.39.170,192.168.39.245
	I0314 18:20:43.233764  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:43.236996  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:43.237429  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:43.237458  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:43.237711  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:20:43.242307  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:20:43.255831  960722 mustload.go:65] Loading cluster: ha-105786
	I0314 18:20:43.256090  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:43.256496  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:43.256558  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:43.272927  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0314 18:20:43.273365  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:43.273806  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:43.273828  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:43.274143  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:43.274328  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:20:43.275764  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:20:43.276038  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:43.276073  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:43.290709  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38899
	I0314 18:20:43.291151  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:43.291649  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:43.291671  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:43.291987  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:43.292178  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:20:43.292379  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.190
	I0314 18:20:43.292400  960722 certs.go:194] generating shared ca certs ...
	I0314 18:20:43.292421  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.292562  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:20:43.292601  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:20:43.292610  960722 certs.go:256] generating profile certs ...
	I0314 18:20:43.292676  960722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:20:43.292700  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644
	I0314 18:20:43.292714  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.190 192.168.39.254]
	I0314 18:20:43.369573  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 ...
	I0314 18:20:43.369603  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644: {Name:mk26652353e711860e9741d7f16cc8eff62446e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.369780  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644 ...
	I0314 18:20:43.369798  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644: {Name:mk3152f16716880926c7353afe7016ddf0844e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.369893  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:20:43.370044  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
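Editor's note: the lines above mint a new apiserver certificate whose IP SANs cover the service IP, localhost, all three control-plane node IPs, and the 192.168.39.254 VIP. The sketch below shows the general shape of issuing such a CA-signed server certificate with crypto/x509; it is not minikube's own certs.go helper, and the throwaway CA in main exists only so the example runs end to end.

```go
package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"fmt"
	"math/big"
	"net"
	"time"
)

// signServerCert issues a server certificate carrying the given IP SANs,
// signed by an existing CA, similar in spirit to the profile cert above.
func signServerCert(caCert *x509.Certificate, caKey *rsa.PrivateKey, sans []net.IP) ([]byte, *rsa.PrivateKey, error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{CommonName: "minikube", Organization: []string{"jenkins.ha-105786-m03"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  sans,
		DNSNames:     []string{"ha-105786-m03", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
	if err != nil {
		return nil, nil, err
	}
	return der, key, nil
}

func main() {
	// Throwaway self-signed CA so the sketch is self-contained.
	caKey, _ := rsa.GenerateKey(rand.Reader, 2048)
	caTmpl := &x509.Certificate{
		SerialNumber:          big.NewInt(1),
		Subject:               pkix.Name{CommonName: "minikubeCA"},
		NotBefore:             time.Now(),
		NotAfter:              time.Now().Add(10 * 365 * 24 * time.Hour),
		IsCA:                  true,
		KeyUsage:              x509.KeyUsageCertSign,
		BasicConstraintsValid: true,
	}
	caDER, _ := x509.CreateCertificate(rand.Reader, caTmpl, caTmpl, &caKey.PublicKey, caKey)
	caCert, _ := x509.ParseCertificate(caDER)

	sans := []net.IP{
		net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
		net.ParseIP("192.168.39.170"), net.ParseIP("192.168.39.245"),
		net.ParseIP("192.168.39.190"), net.ParseIP("192.168.39.254"),
	}
	der, _, err := signServerCert(caCert, caKey, sans)
	fmt.Println("cert bytes:", len(der), "err:", err)
}
```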
	I0314 18:20:43.370200  960722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:20:43.370219  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:20:43.370245  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:20:43.370267  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:20:43.370286  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:20:43.370304  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:20:43.370322  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:20:43.370338  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:20:43.370356  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:20:43.370422  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:20:43.370464  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:20:43.370478  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:20:43.370515  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:20:43.370547  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:20:43.370577  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:20:43.370632  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:20:43.370671  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:43.370693  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:20:43.370715  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:20:43.370757  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:20:43.373565  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:43.373911  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:20:43.373940  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:43.374033  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:20:43.374212  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:20:43.374353  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:20:43.374478  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:20:43.448486  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:20:43.453764  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:20:43.466380  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:20:43.470988  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0314 18:20:43.481676  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:20:43.486231  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:20:43.498692  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:20:43.503320  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0314 18:20:43.520987  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:20:43.526932  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:20:43.543635  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:20:43.548839  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:20:43.561552  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:20:43.590213  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:20:43.617864  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:20:43.645512  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:20:43.673965  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0314 18:20:43.700045  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:20:43.724879  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:20:43.751227  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:20:43.779325  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:20:43.805830  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:20:43.835382  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:20:43.865476  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:20:43.883583  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0314 18:20:43.902725  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:20:43.921088  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0314 18:20:43.938665  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:20:43.959055  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:20:43.978359  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:20:43.997134  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:20:44.005825  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:20:44.018041  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.023061  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.023121  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.029198  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:20:44.041694  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:20:44.055167  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.060067  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.060127  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.066294  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:20:44.079907  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:20:44.093439  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.098784  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.098837  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.105059  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:20:44.117241  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:20:44.121721  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:20:44.121776  960722 kubeadm.go:928] updating node {m03 192.168.39.190 8443 v1.28.4 crio true true} ...
	I0314 18:20:44.121874  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:20:44.121899  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:20:44.121930  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
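The kube-vip static pod above advertises the control-plane VIP 192.168.39.254 on port 8443 and load-balances the API servers behind it. A quick sanity check, assuming anonymous access to /healthz is allowed (the default system:public-info-viewer binding grants it; -k skips verification of the cluster's self-signed certificate):

    # should print "ok" once kube-vip holds the VIP and an apiserver answers
    curl -k https://192.168.39.254:8443/healthz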
	I0314 18:20:44.121988  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:20:44.132607  960722 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:20:44.132649  960722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:20:44.144753  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:20:44.144777  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:20:44.144823  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 18:20:44.144848  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:20:44.144853  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:20:44.144963  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:20:44.144823  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 18:20:44.145038  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:20:44.155969  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:20:44.155999  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:20:44.156019  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:20:44.156047  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:20:44.205345  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:20:44.205461  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:20:44.314449  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:20:44.314494  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
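Each binary is fetched from dl.k8s.io together with its published .sha256 file (the checksum= fragment in the URLs above). A manual equivalent of that integrity check, using the same release URLs, would be:

    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet
    curl -LO https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
    # two spaces between hash and filename, as sha256sum expects
    echo "$(cat kubelet.sha256)  kubelet" | sha256sum --check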
	I0314 18:20:45.214882  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:20:45.226985  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:20:45.246323  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:20:45.265747  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:20:45.284264  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:20:45.288879  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:20:45.302625  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:45.430647  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:20:45.449240  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:20:45.449585  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:45.449637  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:45.465079  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0314 18:20:45.465530  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:45.466048  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:45.466073  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:45.466441  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:45.466665  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:20:45.466918  960722 start.go:316] joinCluster: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cluster
Name:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false
inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:f
alse DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:20:45.467093  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:20:45.467117  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:20:45.470410  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:45.470818  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:20:45.470857  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:45.471038  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:20:45.471223  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:20:45.471367  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:20:45.471556  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:20:45.642229  960722 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:45.642294  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zioowz.ncs2n2q41aci2frn --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443"
	I0314 18:21:14.250837  960722 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zioowz.ncs2n2q41aci2frn --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443": (28.60851409s)
	I0314 18:21:14.250884  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
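The join command is minted on the primary with "kubeadm token create --print-join-command --ttl=0"; the --discovery-token-ca-cert-hash it embeds is the SHA-256 of the cluster CA's public key. It can be recomputed on any existing control-plane node with the kubeadm-documented one-liner below (a sketch; it should reproduce the sha256:9854... value used above):

    openssl x509 -pubkey -in /etc/kubernetes/pki/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'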
	I0314 18:21:14.918133  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786-m03 minikube.k8s.io/updated_at=2024_03_14T18_21_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=false
	I0314 18:21:15.063537  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-105786-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:21:15.219595  960722 start.go:318] duration metric: took 29.752671612s to joinCluster
	I0314 18:21:15.219679  960722 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:21:15.221279  960722 out.go:177] * Verifying Kubernetes components...
	I0314 18:21:15.220130  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:21:15.222930  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:15.558226  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:21:15.584548  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:21:15.584870  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:21:15.584938  960722 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.170:8443
	I0314 18:21:15.585248  960722 node_ready.go:35] waiting up to 6m0s for node "ha-105786-m03" to be "Ready" ...
	I0314 18:21:15.585338  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:15.585351  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:15.585363  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:15.585370  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:15.589124  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:16.085923  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:16.085947  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:16.085984  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:16.085989  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:16.091301  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:21:16.585699  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:16.585731  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:16.585743  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:16.585751  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:16.589694  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.086243  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:17.086264  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:17.086272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:17.086277  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:17.090180  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.585728  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:17.585755  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:17.585766  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:17.585772  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:17.589402  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.590184  960722 node_ready.go:53] node "ha-105786-m03" has status "Ready":"False"
	I0314 18:21:18.086352  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:18.086382  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:18.086391  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:18.086396  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:18.089751  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:18.585826  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:18.585849  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:18.585857  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:18.585869  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:18.589603  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:19.086199  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:19.086226  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:19.086237  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:19.086242  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:19.090256  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:19.585614  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:19.585638  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:19.585646  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:19.585650  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:19.589705  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:19.590523  960722 node_ready.go:53] node "ha-105786-m03" has status "Ready":"False"
	I0314 18:21:20.085912  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.085938  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.085947  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.085951  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.090230  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.090977  960722 node_ready.go:49] node "ha-105786-m03" has status "Ready":"True"
	I0314 18:21:20.090995  960722 node_ready.go:38] duration metric: took 4.505728967s for node "ha-105786-m03" to be "Ready" ...
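The node_ready loop above simply polls GET /api/v1/nodes/ha-105786-m03 until the Ready condition turns True. An equivalent one-shot wait from the host, assuming the kubeconfig context is named after the profile as minikube normally arranges, is:

    kubectl --context ha-105786 wait --for=condition=Ready node/ha-105786-m03 --timeout=6m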
	I0314 18:21:20.091004  960722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:21:20.091076  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:20.091091  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.091100  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.091105  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.098186  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:20.105087  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.105177  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-cx8rc
	I0314 18:21:20.105188  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.105205  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.105215  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.108258  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.108884  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.108896  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.108902  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.108907  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.112261  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.112820  960722 pod_ready.go:92] pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.112837  960722 pod_ready.go:81] duration metric: took 7.728308ms for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.112845  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.112898  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsddl
	I0314 18:21:20.112907  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.112913  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.112917  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.119213  960722 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:21:20.119815  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.119832  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.119838  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.119843  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.123274  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.123867  960722 pod_ready.go:92] pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.123883  960722 pod_ready.go:81] duration metric: took 11.0316ms for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.123891  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.123936  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786
	I0314 18:21:20.123943  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.123950  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.123954  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.126695  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:20.127411  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.127427  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.127434  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.127441  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.130847  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.131509  960722 pod_ready.go:92] pod "etcd-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.131530  960722 pod_ready.go:81] duration metric: took 7.63204ms for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.131541  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.131600  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:21:20.131612  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.131621  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.131629  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.135022  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.135511  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:20.135525  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.135532  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.135535  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.139674  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.142312  960722 pod_ready.go:92] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.142331  960722 pod_ready.go:81] duration metric: took 10.781899ms for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.142342  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.286690  960722 request.go:629] Waited for 144.264427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.286773  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.286778  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.286785  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.286789  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.290137  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.486312  960722 request.go:629] Waited for 195.438244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.486381  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.486387  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.486396  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.486402  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.491146  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.686288  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.686320  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.686332  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.686338  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.690392  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.886129  960722 request.go:629] Waited for 195.138224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.886203  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.886207  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.886215  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.886218  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.889915  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:21.143239  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:21.143263  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.143272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.143276  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.147377  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:21.286413  960722 request.go:629] Waited for 138.318793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.286498  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.286510  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.286520  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.286528  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.290446  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:21.643256  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:21.643281  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.643289  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.643294  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.647657  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:21.686994  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.687013  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.687022  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.687033  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.690519  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:22.143217  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:22.143241  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.143249  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.143254  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.147683  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:22.148777  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:22.148795  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.148805  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.148810  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.153712  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:22.154465  960722 pod_ready.go:102] pod "etcd-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:22.642789  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:22.642819  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.642830  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.642845  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.653779  960722 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:21:22.654524  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:22.654540  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.654549  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.654556  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.662537  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:23.143313  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:23.143339  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.143350  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.143355  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.147905  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:23.148858  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:23.148878  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.148889  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.148895  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.152464  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:23.642960  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:23.642992  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.643004  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.643009  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.651505  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:23.652300  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:23.652320  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.652327  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.652331  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.656566  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:24.142543  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:24.142567  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.142575  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.142579  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.146469  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.147168  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:24.147183  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.147190  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.147195  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.150467  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.642886  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:24.642910  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.642918  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.642922  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.646490  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.647195  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:24.647209  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.647217  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.647222  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.650491  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.650957  960722 pod_ready.go:92] pod "etcd-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.650979  960722 pod_ready.go:81] duration metric: took 4.508629759s for pod "etcd-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
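The pod_ready phase repeats the same pattern for each system-critical pod, matching the component/k8s-app labels listed above. A coarser but equivalent check with kubectl, using those same labels (a sketch, assuming the ha-105786 context), is:

    kubectl --context ha-105786 -n kube-system wait --for=condition=Ready pod -l component=etcd --timeout=6m
    kubectl --context ha-105786 -n kube-system wait --for=condition=Ready pod -l k8s-app=kube-dns --timeout=6m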
	I0314 18:21:24.650997  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.651052  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786
	I0314 18:21:24.651062  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.651069  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.651072  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.654107  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.655255  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:24.655276  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.655286  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.655293  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.657959  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:24.658539  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.658562  960722 pod_ready.go:81] duration metric: took 7.558089ms for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.658574  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.686920  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m02
	I0314 18:21:24.686945  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.686959  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.686965  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.690465  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.886526  960722 request.go:629] Waited for 195.399001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:24.886599  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:24.886606  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.886618  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.886627  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.890692  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:24.891235  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.891256  960722 pod_ready.go:81] duration metric: took 232.674585ms for pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.891265  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:25.086643  960722 request.go:629] Waited for 195.304841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.086746  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.086758  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.086770  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.086781  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.091072  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.286900  960722 request.go:629] Waited for 194.900934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.286995  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.287002  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.287010  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.287017  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.290890  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:25.486216  960722 request.go:629] Waited for 94.675787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.486324  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.486336  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.486347  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.486351  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.490683  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.686301  960722 request.go:629] Waited for 194.320717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.686404  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.686413  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.686422  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.686430  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.690545  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.892206  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.892259  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.892272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.892278  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.895673  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.086940  960722 request.go:629] Waited for 190.282738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.087018  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.087024  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.087032  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.087036  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.091882  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:26.391690  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:26.391712  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.391720  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.391730  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.395703  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.486175  960722 request.go:629] Waited for 89.650069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.486243  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.486250  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.486261  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.486271  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.489646  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.892192  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:26.892251  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.892264  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.892290  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.896240  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.897407  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.897426  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.897436  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.897443  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.900712  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.901357  960722 pod_ready.go:102] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:27.392547  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:27.392578  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.392591  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.392598  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.396799  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:27.397812  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:27.397841  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.397852  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.397858  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.401108  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:27.891460  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:27.891484  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.891491  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.891494  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.895037  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:27.895718  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:27.895740  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.895751  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.895757  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.899227  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:28.391887  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:28.391916  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.391928  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.391936  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.399750  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:28.401603  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:28.401629  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.401640  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.401648  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.405076  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:28.891943  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:28.891977  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.891989  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.891997  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.896023  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:28.897007  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:28.897029  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.897036  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.897039  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.900075  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.392371  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:29.392396  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.392407  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.392413  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.396392  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.397535  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:29.397552  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.397559  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.397564  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.400717  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.401356  960722 pod_ready.go:102] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:29.891508  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:29.891533  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.891541  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.891546  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.895729  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:29.897024  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:29.897046  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.897057  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.897064  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.900241  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.900998  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:29.901022  960722 pod_ready.go:81] duration metric: took 5.009747113s for pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.901035  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.901103  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786
	I0314 18:21:29.901114  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.901124  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.901129  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.904264  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.905115  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:29.905132  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.905143  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.905148  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.907940  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:29.908513  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:29.908530  960722 pod_ready.go:81] duration metric: took 7.488189ms for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.908539  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.908599  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:21:29.908607  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.908614  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.908622  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.911967  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.086896  960722 request.go:629] Waited for 174.32646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:30.086970  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:30.086981  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.086993  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.086999  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.091325  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.091816  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.091840  960722 pod_ready.go:81] duration metric: took 183.294651ms for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.091850  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.286334  960722 request.go:629] Waited for 194.37948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m03
	I0314 18:21:30.286420  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m03
	I0314 18:21:30.286426  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.286434  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.286446  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.290575  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.486791  960722 request.go:629] Waited for 195.30278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.486864  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.486875  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.486886  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.486894  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.490449  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.490992  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.491014  960722 pod_ready.go:81] duration metric: took 399.156594ms for pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.491025  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rjsv" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.686013  960722 request.go:629] Waited for 194.876678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rjsv
	I0314 18:21:30.686073  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rjsv
	I0314 18:21:30.686078  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.686085  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.686089  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.690974  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.886508  960722 request.go:629] Waited for 194.08458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.886569  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.886574  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.886581  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.886585  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.890076  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.890748  960722 pod_ready.go:92] pod "kube-proxy-6rjsv" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.890766  960722 pod_ready.go:81] duration metric: took 399.734743ms for pod "kube-proxy-6rjsv" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.890776  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.086344  960722 request.go:629] Waited for 195.462369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:21:31.086404  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:21:31.086410  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.086420  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.086426  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.090034  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.286093  960722 request.go:629] Waited for 195.283418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:31.286192  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:31.286203  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.286211  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.286215  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.289646  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.290391  960722 pod_ready.go:92] pod "kube-proxy-hd8mx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:31.290411  960722 pod_ready.go:81] duration metric: took 399.629073ms for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.290422  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.486491  960722 request.go:629] Waited for 195.976001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:21:31.486547  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:21:31.486552  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.486560  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.486565  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.490567  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.686972  960722 request.go:629] Waited for 195.411714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:31.687050  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:31.687061  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.687073  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.687106  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.691407  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:31.692074  960722 pod_ready.go:92] pod "kube-proxy-qpz89" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:31.692095  960722 pod_ready.go:81] duration metric: took 401.664857ms for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.692104  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.886657  960722 request.go:629] Waited for 194.461564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:21:31.886737  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:21:31.886749  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.886782  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.886804  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.890792  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.086783  960722 request.go:629] Waited for 195.361491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:32.086854  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:32.086860  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.086870  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.086876  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.090475  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.091089  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.091111  960722 pod_ready.go:81] duration metric: took 398.998864ms for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.091124  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.286267  960722 request.go:629] Waited for 195.058609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:21:32.286353  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:21:32.286366  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.286373  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.286377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.291224  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:32.486368  960722 request.go:629] Waited for 193.418218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:32.486426  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:32.486447  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.486468  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.486489  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.491230  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:32.491931  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.491953  960722 pod_ready.go:81] duration metric: took 400.81441ms for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.491966  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.685999  960722 request.go:629] Waited for 193.931338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m03
	I0314 18:21:32.686072  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m03
	I0314 18:21:32.686081  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.686095  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.686104  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.693447  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:32.886470  960722 request.go:629] Waited for 192.192425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:32.886582  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:32.886595  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.886605  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.886613  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.890459  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.891038  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.891063  960722 pod_ready.go:81] duration metric: took 399.089136ms for pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.891078  960722 pod_ready.go:38] duration metric: took 12.800064442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
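
The pod_ready entries above record how minikube decides a control-plane pod is healthy: it repeatedly GETs the pod and its node from the API server and checks the pod's Ready condition, retrying until the condition is True or the 6m0s timeout expires. Below is a minimal client-go sketch of an equivalent wait; the function name, poll interval, and the kubeconfig-based clientset are illustrative assumptions, not minikube's actual pod_ready.go implementation.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    corev1 "k8s.io/api/core/v1"
    metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout expires.
func waitPodReady(cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
    deadline := time.Now().Add(timeout)
    for time.Now().Before(deadline) {
        pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err == nil {
            for _, cond := range pod.Status.Conditions {
                if cond.Type == corev1.PodReady && cond.Status == corev1.ConditionTrue {
                    return nil
                }
            }
        }
        time.Sleep(500 * time.Millisecond) // poll interval is an assumption
    }
    return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
}

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    // Pod name taken from the log above; 6m mirrors the timeout minikube logs.
    if err := waitPodReady(cs, "kube-system", "kube-apiserver-ha-105786-m03", 6*time.Minute); err != nil {
        log.Fatal(err)
    }
    fmt.Println("pod is Ready")
}
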
	I0314 18:21:32.891095  960722 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:21:32.891160  960722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:21:32.910084  960722 api_server.go:72] duration metric: took 17.690361984s to wait for apiserver process to appear ...
	I0314 18:21:32.910107  960722 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:21:32.910130  960722 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0314 18:21:32.919423  960722 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0314 18:21:32.919494  960722 round_trippers.go:463] GET https://192.168.39.170:8443/version
	I0314 18:21:32.919499  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.919507  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.919511  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.920660  960722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:21:32.920872  960722 api_server.go:141] control plane version: v1.28.4
	I0314 18:21:32.920895  960722 api_server.go:131] duration metric: took 10.779673ms to wait for apiserver health ...
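
The api_server lines above perform two distinct probes: a GET of /healthz, which must return 200 with body "ok", and a GET of /version, from which the control-plane version (v1.28.4 here) is read. A rough standard-library sketch of the same probes follows; skipping TLS verification is an assumption made to keep the example short, where a real client would present the cluster CA and client certificates.

package main

import (
    "crypto/tls"
    "fmt"
    "io"
    "net/http"
    "time"
)

func main() {
    client := &http.Client{
        Timeout:   5 * time.Second,
        Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
    }
    // Address taken from the log above.
    for _, path := range []string{"/healthz", "/version"} {
        resp, err := client.Get("https://192.168.39.170:8443" + path)
        if err != nil {
            fmt.Println(path, "error:", err)
            continue
        }
        body, _ := io.ReadAll(resp.Body)
        resp.Body.Close()
        fmt.Printf("%s -> %d %s\n", path, resp.StatusCode, body)
    }
}
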
	I0314 18:21:32.920906  960722 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:21:33.086608  960722 request.go:629] Waited for 165.598091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.086674  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.086681  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.086698  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.086710  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.094732  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:33.104548  960722 system_pods.go:59] 24 kube-system pods found
	I0314 18:21:33.104578  960722 system_pods.go:61] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:21:33.104583  960722 system_pods.go:61] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:21:33.104587  960722 system_pods.go:61] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:21:33.104590  960722 system_pods.go:61] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:21:33.104593  960722 system_pods.go:61] "etcd-ha-105786-m03" [aaa4af53-3ee2-484a-8067-80dffaefd8ea] Running
	I0314 18:21:33.104596  960722 system_pods.go:61] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:21:33.104599  960722 system_pods.go:61] "kindnet-gmvl5" [a64e3967-f28a-4fdb-a5ee-da05c6aba46a] Running
	I0314 18:21:33.104602  960722 system_pods.go:61] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:21:33.104604  960722 system_pods.go:61] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:21:33.104607  960722 system_pods.go:61] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Running
	I0314 18:21:33.104610  960722 system_pods.go:61] "kube-apiserver-ha-105786-m03" [1a142787-c591-472a-8e85-dbe976383bff] Running
	I0314 18:21:33.104614  960722 system_pods.go:61] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:21:33.104620  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:21:33.104623  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m03" [332be1f1-48d4-425a-beda-44fe074bac93] Running
	I0314 18:21:33.104626  960722 system_pods.go:61] "kube-proxy-6rjsv" [6e2b5963-5c97-4f70-999a-f01ad58822fc] Running
	I0314 18:21:33.104629  960722 system_pods.go:61] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:21:33.104632  960722 system_pods.go:61] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:21:33.104635  960722 system_pods.go:61] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:21:33.104638  960722 system_pods.go:61] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:21:33.104644  960722 system_pods.go:61] "kube-scheduler-ha-105786-m03" [2bc9adb8-e64d-4b60-a392-5fa73d89b365] Running
	I0314 18:21:33.104651  960722 system_pods.go:61] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104658  960722 system_pods.go:61] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104667  960722 system_pods.go:61] "kube-vip-ha-105786-m03" [272b8465-c012-4e94-8142-2db4d20fd844] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104674  960722 system_pods.go:61] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:21:33.104681  960722 system_pods.go:74] duration metric: took 183.768694ms to wait for pod list to return data ...
	I0314 18:21:33.104691  960722 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:21:33.286741  960722 request.go:629] Waited for 181.965323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:21:33.286798  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:21:33.286803  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.286811  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.286816  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.291294  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:33.291440  960722 default_sa.go:45] found service account: "default"
	I0314 18:21:33.291455  960722 default_sa.go:55] duration metric: took 186.757524ms for default service account to be created ...
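
The recurring "Waited for ... due to client-side throttling, not priority and fairness" messages are emitted by client-go's own token-bucket rate limiter, not by the API server: once the client exceeds its configured QPS/Burst, requests are delayed locally, which is why the waits between the paired pod/node GETs above hover just under 200ms. A minimal sketch of where that limiter is configured; the QPS and Burst values chosen here are illustrative, not minikube's settings.

package main

import (
    "log"

    "k8s.io/client-go/kubernetes"
    "k8s.io/client-go/tools/clientcmd"
)

func main() {
    cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
    if err != nil {
        log.Fatal(err)
    }
    // client-go defaults to QPS=5 (one request per 200ms) and Burst=10, which is
    // what produces the ~195ms "client-side throttling" waits in the log above.
    cfg.QPS = 50
    cfg.Burst = 100
    cs, err := kubernetes.NewForConfig(cfg)
    if err != nil {
        log.Fatal(err)
    }
    _ = cs // use the clientset as usual; requests are now throttled at the higher rate
}
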
	I0314 18:21:33.291465  960722 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:21:33.485973  960722 request.go:629] Waited for 194.419337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.486045  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.486050  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.486061  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.486066  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.494649  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:33.502794  960722 system_pods.go:86] 24 kube-system pods found
	I0314 18:21:33.502820  960722 system_pods.go:89] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:21:33.502826  960722 system_pods.go:89] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:21:33.502831  960722 system_pods.go:89] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:21:33.502834  960722 system_pods.go:89] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:21:33.502839  960722 system_pods.go:89] "etcd-ha-105786-m03" [aaa4af53-3ee2-484a-8067-80dffaefd8ea] Running
	I0314 18:21:33.502845  960722 system_pods.go:89] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:21:33.502851  960722 system_pods.go:89] "kindnet-gmvl5" [a64e3967-f28a-4fdb-a5ee-da05c6aba46a] Running
	I0314 18:21:33.502857  960722 system_pods.go:89] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:21:33.502867  960722 system_pods.go:89] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:21:33.502875  960722 system_pods.go:89] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Running
	I0314 18:21:33.502884  960722 system_pods.go:89] "kube-apiserver-ha-105786-m03" [1a142787-c591-472a-8e85-dbe976383bff] Running
	I0314 18:21:33.502893  960722 system_pods.go:89] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:21:33.502900  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:21:33.502905  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m03" [332be1f1-48d4-425a-beda-44fe074bac93] Running
	I0314 18:21:33.502911  960722 system_pods.go:89] "kube-proxy-6rjsv" [6e2b5963-5c97-4f70-999a-f01ad58822fc] Running
	I0314 18:21:33.502916  960722 system_pods.go:89] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:21:33.502924  960722 system_pods.go:89] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:21:33.502931  960722 system_pods.go:89] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:21:33.502937  960722 system_pods.go:89] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:21:33.502947  960722 system_pods.go:89] "kube-scheduler-ha-105786-m03" [2bc9adb8-e64d-4b60-a392-5fa73d89b365] Running
	I0314 18:21:33.502961  960722 system_pods.go:89] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502975  960722 system_pods.go:89] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502990  960722 system_pods.go:89] "kube-vip-ha-105786-m03" [272b8465-c012-4e94-8142-2db4d20fd844] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502998  960722 system_pods.go:89] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:21:33.503006  960722 system_pods.go:126] duration metric: took 211.531286ms to wait for k8s-apps to be running ...
	I0314 18:21:33.503016  960722 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:21:33.503076  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:21:33.521843  960722 system_svc.go:56] duration metric: took 18.815526ms WaitForService to wait for kubelet
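
The kubelet check above relies on the exit status of systemctl is-active --quiet: zero means the unit is active, anything else means it is not. A minimal os/exec sketch of the same check; running it locally rather than over the test's SSH runner (and without sudo) is a simplification.

package main

import (
    "fmt"
    "log"
    "os/exec"
)

func main() {
    // "systemctl is-active --quiet kubelet" exits 0 only when the unit is active,
    // so the error from Run() doubles as the "not running" signal.
    if err := exec.Command("systemctl", "is-active", "--quiet", "kubelet").Run(); err != nil {
        log.Fatalf("kubelet is not active: %v", err)
    }
    fmt.Println("kubelet is active")
}
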
	I0314 18:21:33.521874  960722 kubeadm.go:576] duration metric: took 18.302157253s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:21:33.521894  960722 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:21:33.686250  960722 request.go:629] Waited for 164.272977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes
	I0314 18:21:33.686344  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes
	I0314 18:21:33.686353  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.686366  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.686377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.690388  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:33.691354  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691373  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691385  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691388  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691392  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691394  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691398  960722 node_conditions.go:105] duration metric: took 169.499624ms to run NodePressure ...
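
The node_conditions lines above summarize a single GET of /api/v1/nodes: for each of the three cluster members the test records ephemeral-storage capacity (17734596Ki) and CPU capacity (2) and, per the function name, inspects the node pressure conditions. A sketch of the same listing, written as a fragment that assumes a clientset cs built as in the rate-limiter sketch above plus the corev1/metav1 imports from the pod-readiness sketch.

// Fragment: assumes cs (kubernetes.Interface), corev1, metav1, context, fmt, log.
nodes, err := cs.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
if err != nil {
    log.Fatal(err)
}
for _, n := range nodes.Items {
    fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
        n.Name, n.Status.Capacity.Cpu(), n.Status.Capacity.StorageEphemeral())
    for _, c := range n.Status.Conditions {
        // A MemoryPressure/DiskPressure/PIDPressure condition set to True would indicate pressure.
        if c.Status == corev1.ConditionTrue && c.Type != corev1.NodeReady {
            fmt.Printf("  pressure condition: %s\n", c.Type)
        }
    }
}
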
	I0314 18:21:33.691410  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:21:33.691429  960722 start.go:254] writing updated cluster config ...
	I0314 18:21:33.691715  960722 ssh_runner.go:195] Run: rm -f paused
	I0314 18:21:33.746934  960722 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:21:33.749619  960722 out.go:177] * Done! kubectl is now configured to use "ha-105786" cluster and "default" namespace by default
	
	
	==> CRI-O <==
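
The entries below are CRI-O's debug log of incoming CRI gRPC calls: Version and ListContainers on the RuntimeService and ImageFsInfo on the ImageService, each tagged with a request id by the otel-collector interceptor. A minimal sketch of issuing one such call directly against the CRI-O socket with the cri-api client; the socket path and the insecure dial option are assumed defaults, not values taken from this run.

package main

import (
    "context"
    "fmt"
    "log"
    "time"

    "google.golang.org/grpc"
    "google.golang.org/grpc/credentials/insecure"
    runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
    // Assumed default CRI-O socket path inside the VM.
    conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
        grpc.WithTransportCredentials(insecure.NewCredentials()))
    if err != nil {
        log.Fatal(err)
    }
    defer conn.Close()

    ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
    defer cancel()

    // Same RPC as the "/runtime.v1.RuntimeService/Version" entries below.
    rt := runtimeapi.NewRuntimeServiceClient(conn)
    v, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
    if err != nil {
        log.Fatal(err)
    }
    fmt.Printf("%s %s (CRI %s)\n", v.RuntimeName, v.RuntimeVersion, v.RuntimeApiVersion)
}
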
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.026349200Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=e9322915-0d48-4ecd-879b-3444a0e3177a name=/runtime.v1.RuntimeService/Version
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.027180066Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ab5bb211-c876-4f40-8045-80c9bfeead58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.027655050Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440710027631256,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ab5bb211-c876-4f40-8045-80c9bfeead58 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.028217167Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a85bee64-c220-4369-8626-487e1dde6e8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.028308104Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a85bee64-c220-4369-8626-487e1dde6e8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.028632769Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a85bee64-c220-4369-8626-487e1dde6e8e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.068261266Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5ecbd392-fc94-4680-b478-498b8f05703a name=/runtime.v1.RuntimeService/Version
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.068799463Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5ecbd392-fc94-4680-b478-498b8f05703a name=/runtime.v1.RuntimeService/Version
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.070024622Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6db5231c-04c2-48a1-a9f6-dc58ab413fd0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.070788091Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440710070760518,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6db5231c-04c2-48a1-a9f6-dc58ab413fd0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.071395144Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a05266c3-a3df-45d4-97f5-e0077143f2d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.071471905Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a05266c3-a3df-45d4-97f5-e0077143f2d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.071830049Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a05266c3-a3df-45d4-97f5-e0077143f2d0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.120015489Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3be195d9-c1ad-4155-b099-be56bea553bb name=/runtime.v1.RuntimeService/Version
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.120116008Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3be195d9-c1ad-4155-b099-be56bea553bb name=/runtime.v1.RuntimeService/Version
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.125316198Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2fcd3dde-ba8a-474c-9d6f-c3d90949de72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.127231535Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440710127204924,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2fcd3dde-ba8a-474c-9d6f-c3d90949de72 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.127994958Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3500558a-5d54-42a7-a2aa-ef76d19c4405 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.128074424Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3500558a-5d54-42a7-a2aa-ef76d19c4405 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.128631051Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3500558a-5d54-42a7-a2aa-ef76d19c4405 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.153028927Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=a64623a7-06f9-4061-93c8-1f2b5bd8215d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.153821450Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4h99c,Uid:6f1d3430-1aec-4155-8b75-951d851d54ae,Namespace:default,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440495105633530,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:21:34.785609145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jsddl,Uid:bdbdea16-97b0-4581-8bab-9a472af11004,Namespace:kube-system,Attempt:0,},Stat
e:SANDBOX_READY,CreatedAt:1710440346377492166,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:05.130015748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cx8rc,Uid:d2e960de-67a9-4385-ba02-78a744602bcc,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440346368515418,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen
: 2024-03-14T18:19:05.130122921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:566fc43f-5610-4dcd-b683-1cc87e6ed609,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440345436283855,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\
"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T18:19:05.127453312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&PodSandboxMetadata{Name:kube-proxy-hd8mx,Uid:3e003f67-93dd-4105-a7bd-68d9af563ea4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440342326310041,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[stri
ng]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.519257450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&PodSandboxMetadata{Name:kindnet-9b2pr,Uid:e23e9c49-0b7d-46ca-ae62-11e9b26a1280,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440341859455167,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.650209605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-105786,Uid:ec78945afcff39cee32fcf6f6d645c30,Namespace:kube-system,Attempt
:0,},State:SANDBOX_READY,CreatedAt:1710440320420085884,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec78945afcff39cee32fcf6f6d645c30,kubernetes.io/config.seen: 2024-03-14T18:18:39.863408059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-105786,Uid:8a8d15e80402cb826977826234ee3c6a,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440320397166125,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{kubernetes.io/config.hash: 8a
8d15e80402cb826977826234ee3c6a,kubernetes.io/config.seen: 2024-03-14T18:18:39.863408693Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-105786,Uid:dc5e46764078ce514b56622c3d7888bf,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440320381538889,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc5e46764078ce514b56622c3d7888bf,kubernetes.io/config.seen: 2024-03-14T18:18:39.863407001Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-105786,
Uid:6dac53b7248a384afeccfc55d43bb2fb,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440320365602419,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.170:8443,kubernetes.io/config.hash: 6dac53b7248a384afeccfc55d43bb2fb,kubernetes.io/config.seen: 2024-03-14T18:18:39.863405928Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&PodSandboxMetadata{Name:etcd-ha-105786,Uid:0cd908946f83a665c0ef77bb7bd5e5ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710440320359365444,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-105786
,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.170:2379,kubernetes.io/config.hash: 0cd908946f83a665c0ef77bb7bd5e5ea,kubernetes.io/config.seen: 2024-03-14T18:18:39.863402441Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=a64623a7-06f9-4061-93c8-1f2b5bd8215d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.154675834Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8f3a85e8-b801-4033-9c03-a9eda4758187 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.154815222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8f3a85e8-b801-4033-9c03-a9eda4758187 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:25:10 ha-105786 crio[676]: time="2024-03-14 18:25:10.155067831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8f3a85e8-b801-4033-9c03-a9eda4758187 name=/runtime.v1.RuntimeService/ListContainers
	
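	The CRI-O entries above appear to be the container runtime's journal on the primary control-plane node. A minimal way to reproduce a similar capture interactively (a sketch, assuming CRI-O runs as the crio systemd unit inside the minikube VM and the ha-105786 profile still exists):
	
	    minikube -p ha-105786 ssh -- sudo journalctl -u crio --no-pager | tail -n 200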
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	ff5ec432d711f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      23 seconds ago      Exited              kube-vip                  7                   4ae35dda85568       kube-vip-ha-105786
	522fa7bdb84ee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   3 minutes ago       Running             busybox                   0                   c09b6e29d418a       busybox-5b5d89c9d6-4h99c
	b538852248364       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   c6041a600821e       coredns-5dd5756b68-cx8rc
	4fbdd8b34ac46       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      6 minutes ago       Running             coredns                   0                   880e93f2a3ed5       coredns-5dd5756b68-jsddl
	9012775f0e5a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      6 minutes ago       Running             storage-provisioner       0                   a9eac200df7fd       storage-provisioner
	fa5c51367cb91       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    6 minutes ago       Running             kindnet-cni               0                   eebe65b95c46c       kindnet-9b2pr
	50a3dcdc83e53       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      6 minutes ago       Running             kube-proxy                0                   6d43c44b3e99b       kube-proxy-hd8mx
	3f27ba9bd31a4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      6 minutes ago       Running             kube-scheduler            0                   ed2bf5bc80b8e       kube-scheduler-ha-105786
	dd5f374c12463       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      6 minutes ago       Running             kube-apiserver            0                   6cdd0dfbe22b0       kube-apiserver-ha-105786
	ee804d488d0b1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      6 minutes ago       Running             kube-controller-manager   0                   70a5a4aad2129       kube-controller-manager-ha-105786
	ff7528019bad0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      6 minutes ago       Running             etcd                      0                   c3fe1175987df       etcd-ha-105786
	
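	The container status table above mirrors what the CRI command-line client reports on the node, including exited containers such as kube-vip. A minimal way to spot-check it by hand (a sketch, assuming crictl is available in the VM and the ha-105786 profile is running):
	
	    minikube -p ha-105786 ssh -- sudo crictl ps -a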
	
	==> coredns [4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817] <==
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000096424s
	[INFO] 10.244.0.4:59440 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000256482s
	[INFO] 10.244.0.4:45605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227942s
	[INFO] 10.244.0.4:50087 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117895s
	[INFO] 10.244.2.2:47342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135824s
	[INFO] 10.244.2.2:51729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001908531s
	[INFO] 10.244.2.2:45347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168024s
	[INFO] 10.244.2.2:47143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299211s
	[INFO] 10.244.2.2:60120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074392s
	[INFO] 10.244.2.2:54628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165918s
	[INFO] 10.244.1.2:41886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144702s
	[INFO] 10.244.1.2:39387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001194423s
	[INFO] 10.244.1.2:54465 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023379s
	[INFO] 10.244.1.2:58623 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124815s
	[INFO] 10.244.0.4:59741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067152s
	[INFO] 10.244.0.4:39798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049996s
	[INFO] 10.244.0.4:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088514s
	[INFO] 10.244.2.2:53227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129249s
	[INFO] 10.244.1.2:38289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162454s
	[INFO] 10.244.1.2:39880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157216s
	[INFO] 10.244.0.4:40457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166755s
	[INFO] 10.244.0.4:47654 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165231s
	[INFO] 10.244.2.2:56922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021872s
	[INFO] 10.244.2.2:55729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082709s
	[INFO] 10.244.2.2:40076 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091316s
	
	
	==> coredns [b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b] <==
	[INFO] 10.244.1.2:32813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002143596s
	[INFO] 10.244.0.4:47271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163596s
	[INFO] 10.244.0.4:48154 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003476861s
	[INFO] 10.244.0.4:49667 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158457s
	[INFO] 10.244.0.4:33929 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003764421s
	[INFO] 10.244.0.4:52979 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155659s
	[INFO] 10.244.2.2:39342 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199098s
	[INFO] 10.244.2.2:41642 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230102s
	[INFO] 10.244.1.2:54390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186353s
	[INFO] 10.244.1.2:53664 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692621s
	[INFO] 10.244.1.2:43695 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092508s
	[INFO] 10.244.1.2:46229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129166s
	[INFO] 10.244.0.4:59002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104268s
	[INFO] 10.244.2.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190606s
	[INFO] 10.244.2.2:40444 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113686s
	[INFO] 10.244.2.2:35209 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219373s
	[INFO] 10.244.1.2:37537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135802s
	[INFO] 10.244.1.2:50389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105335s
	[INFO] 10.244.0.4:53486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200332s
	[INFO] 10.244.0.4:53550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000308188s
	[INFO] 10.244.2.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134191s
	[INFO] 10.244.1.2:43514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127501s
	[INFO] 10.244.1.2:54638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089684s
	[INFO] 10.244.1.2:43811 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187377s
	[INFO] 10.244.1.2:38538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164864s
	
	
	==> describe nodes <==
	Name:               ha-105786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-105786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 83805f81be844e0c8f423f0d34e721b6
	  System UUID:                83805f81-be84-4e0c-8f42-3f0d34e721b6
	  Boot ID:                    592e9c66-43d6-494c-b6d9-c848f3c684fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4h99c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 coredns-5dd5756b68-cx8rc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 coredns-5dd5756b68-jsddl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     6m10s
	  kube-system                 etcd-ha-105786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m22s
	  kube-system                 kindnet-9b2pr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m10s
	  kube-system                 kube-apiserver-ha-105786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-controller-manager-ha-105786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 kube-proxy-hd8mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m10s
	  kube-system                 kube-scheduler-ha-105786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m20s
	  kube-system                 kube-vip-ha-105786                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m22s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m9s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m7s                   kube-proxy       
	  Normal  Starting                 6m31s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  6m31s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     6m30s (x7 over 6m31s)  kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  6m30s (x8 over 6m31s)  kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m30s (x8 over 6m31s)  kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 6m20s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  6m20s                  kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m20s                  kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m20s                  kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  6m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           6m10s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal  NodeReady                6m5s                   kubelet          Node ha-105786 status is now: NodeReady
	  Normal  RegisteredNode           4m52s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal  RegisteredNode           3m41s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	
	
	Name:               ha-105786-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:22:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-105786-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19ca741ee10483194e2397e40db9727
	  System UUID:                d19ca741-ee10-4831-94e2-397e40db9727
	  Boot ID:                    5733374b-8d82-4a03-be20-977a16629e81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-k6gxp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-105786-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         5m9s
	  kube-system                 kindnet-vpgvl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      5m17s
	  kube-system                 kube-apiserver-ha-105786-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m55s
	  kube-system                 kube-controller-manager-ha-105786-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         5m1s
	  kube-system                 kube-proxy-qpz89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m17s
	  kube-system                 kube-scheduler-ha-105786-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         5m4s
	  kube-system                 kube-vip-ha-105786-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        5m5s   kube-proxy       
	  Normal  RegisteredNode  4m52s  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode  3m41s  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  NodeNotReady    106s   node-controller  Node ha-105786-m02 status is now: NodeNotReady
	
	
	Name:               ha-105786-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_21_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:21:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:25:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-105786-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 52afc1e0396540bd95588eb0b4583ac2
	  System UUID:                52afc1e0-3965-40bd-9558-8eb0b4583ac2
	  Boot ID:                    49a72d33-484d-4235-87f0-aa586a313300
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-g4zv5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m36s
	  kube-system                 etcd-ha-105786-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         3m59s
	  kube-system                 kindnet-gmvl5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m
	  kube-system                 kube-apiserver-ha-105786-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         3m59s
	  kube-system                 kube-controller-manager-ha-105786-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         3m52s
	  kube-system                 kube-proxy-6rjsv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m
	  kube-system                 kube-scheduler-ha-105786-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         3m57s
	  kube-system                 kube-vip-ha-105786-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        3m57s  kube-proxy       
	  Normal  RegisteredNode  3m57s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal  RegisteredNode  3m55s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal  RegisteredNode  3m41s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	
	
	Name:               ha-105786-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:25:03 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-105786-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e09570d6bc045a59dfec434fd490a91
	  System UUID:                7e09570d-6bc0-45a5-9dfe-c434fd490a91
	  Boot ID:                    428a1228-6299-4b38-9548-9e1cc3a10d5e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzjdr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      2m58s
	  kube-system                 kube-proxy-bftws    0 (0%)        0 (0%)      0 (0%)           0 (0%)         2m58s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 2m54s                kube-proxy       
	  Normal  RegisteredNode           2m57s                node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  RegisteredNode           2m56s                node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  RegisteredNode           2m55s                node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  NodeNotReady             116s                 node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  46s (x6 over 3m)     kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    46s (x6 over 3m)     kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     46s (x6 over 3m)     kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Normal  NodeReady                46s (x2 over 2m51s)  kubelet          Node ha-105786-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar14 18:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042750] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571119] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.520173] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.684873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.323219] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062088] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057119] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.191786] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127293] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261879] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.345908] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065032] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.795309] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.848496] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.157868] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.914153] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[Mar14 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.437406] kauditd_printk_skb: 73 callbacks suppressed
	
	
	==> etcd [ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc] <==
	{"level":"warn","ts":"2024-03-14T18:25:10.182378Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.195518Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.294931Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.395047Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.438263Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.447201Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.454207Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.464865Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.47291Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.481812Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.491163Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.495962Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.496173Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.500293Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.509488Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.517993Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.525386Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.531138Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.536244Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.569392Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.576623Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.595873Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.616058Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.633967Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:25:10.695341Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:25:10 up 7 min,  0 users,  load average: 0.31, 0.23, 0.11
	Linux ha-105786 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d] <==
	I0314 18:24:33.567754       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:24:43.580576       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:24:43.580643       1 main.go:227] handling current node
	I0314 18:24:43.580657       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:24:43.580667       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:24:43.580985       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:24:43.581035       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:24:43.581124       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:24:43.581165       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:24:53.596981       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:24:53.597038       1 main.go:227] handling current node
	I0314 18:24:53.597052       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:24:53.597060       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:24:53.597207       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:24:53.597248       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:24:53.597327       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:24:53.597396       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:25:03.612151       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:25:03.612194       1 main.go:227] handling current node
	I0314 18:25:03.612204       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:25:03.612210       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:25:03.612336       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:25:03.612365       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:25:03.612434       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:25:03.612464       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183] <==
	Trace[752158558]:  ---"Txn call succeeded" 6345ms (18:20:03.184)]
	Trace[752158558]: [6.347024177s] [6.347024177s] END
	I0314 18:20:03.186045       1 trace.go:236] Trace[1885140667]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fbc94115-6d0b-414d-ad84-c5eeaf26d5ea,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-105786,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (14-Mar-2024 18:20:01.502) (total time: 1683ms):
	Trace[1885140667]: ["GuaranteedUpdate etcd3" audit-id:fbc94115-6d0b-414d-ad84-c5eeaf26d5ea,key:/leases/kube-node-lease/ha-105786,type:*coordination.Lease,resource:leases.coordination.k8s.io 1683ms (18:20:01.502)
	Trace[1885140667]:  ---"Txn call completed" 1680ms (18:20:03.185)]
	Trace[1885140667]: [1.683699125s] [1.683699125s] END
	E0314 18:20:03.190572       1 controller.go:193] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-jihrvibd4oxtendqa56r4cz4ky\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:20:03.249348       1 trace.go:236] Trace[68449238]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:52a71337-9df0-4a51-bb09-73302b2702d3,client:192.168.39.245,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (14-Mar-2024 18:20:01.641) (total time: 1608ms):
	Trace[68449238]: [1.608286428s] [1.608286428s] END
	E0314 18:20:33.381836       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:20:33.382313       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:20:33.383116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:20:33.384105       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:20:33.384243       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 1.790229ms, panicked: false, err: context canceled, panic-reason: <nil>
	E0314 18:20:33.384279       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:20:33.384943       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:20:33.385436       1 timeout.go:142] post-timeout activity - time-elapsed: 3.035341ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-105786" result: <nil>
	E0314 18:20:33.385553       1 timeout.go:142] post-timeout activity - time-elapsed: 4.896713ms, GET "/api/v1/nodes/ha-105786" result: <nil>
	E0314 18:21:38.394893       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.170:52880->192.168.39.245:10250: write: broken pipe
	E0314 18:22:05.011203       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 4.988µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0314 18:22:05.011992       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:22:05.013237       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:22:05.013320       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:22:05.014570       1 timeout.go:142] post-timeout activity - time-elapsed: 2.578598ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	W0314 18:22:55.888230       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.190]
	
	
	==> kube-controller-manager [ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922] <==
	I0314 18:21:35.048242       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-l2jg8"
	I0314 18:21:35.256340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="364.425814ms"
	I0314 18:21:35.314007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.587399ms"
	I0314 18:21:35.314328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="142.141µs"
	I0314 18:21:36.549022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.837461ms"
	I0314 18:21:36.550338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.453µs"
	I0314 18:21:37.261244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.412621ms"
	I0314 18:21:37.261342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.654µs"
	I0314 18:21:37.412161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.802386ms"
	I0314 18:21:37.412290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.324µs"
	E0314 18:22:10.691761       1 certificate_controller.go:146] Sync csr-8njl4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8njl4": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:22:12.151077       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-105786-m04\" does not exist"
	I0314 18:22:12.202408       1 range_allocator.go:380] "Set node PodCIDR" node="ha-105786-m04" podCIDRs=["10.244.3.0/24"]
	I0314 18:22:12.208056       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fzjdr"
	I0314 18:22:12.208230       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx7t4"
	I0314 18:22:12.451342       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-jkwnt"
	I0314 18:22:12.466914       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-xpmtb"
	I0314 18:22:12.478959       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-sx7t4"
	I0314 18:22:12.501278       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-jsmlk"
	I0314 18:22:15.331369       1 event.go:307] "Event occurred" object="ha-105786-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller"
	I0314 18:22:15.349015       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-105786-m04"
	I0314 18:22:19.420413       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	I0314 18:23:24.830507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.57665ms"
	I0314 18:23:24.830649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.522µs"
	I0314 18:24:24.114447       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	
	
	==> kube-proxy [50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9] <==
	I0314 18:19:02.648031       1 server_others.go:69] "Using iptables proxy"
	I0314 18:19:02.664250       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:19:02.714117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:02.714162       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:02.716766       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:02.717745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:02.718063       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:02.718100       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:02.720154       1 config.go:188] "Starting service config controller"
	I0314 18:19:02.720408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:02.720465       1 config.go:315] "Starting node config controller"
	I0314 18:19:02.720491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:02.721511       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:02.721558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:02.820597       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:02.820664       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:02.822840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2] <==
	E0314 18:21:34.741225       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-k6gxp\": pod busybox-5b5d89c9d6-k6gxp is already assigned to node \"ha-105786-m02\"" pod="default/busybox-5b5d89c9d6-k6gxp"
	I0314 18:21:34.741262       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-k6gxp" node="ha-105786-m02"
	E0314 18:21:34.797062       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-4h99c\": pod busybox-5b5d89c9d6-4h99c is already assigned to node \"ha-105786\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-4h99c" node="ha-105786"
	E0314 18:21:34.797612       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-4h99c\": pod busybox-5b5d89c9d6-4h99c is already assigned to node \"ha-105786\"" pod="default/busybox-5b5d89c9d6-4h99c"
	I0314 18:21:34.797846       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-4h99c" node="ha-105786"
	E0314 18:21:34.797954       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-g4zv5\": pod busybox-5b5d89c9d6-g4zv5 is already assigned to node \"ha-105786-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-g4zv5" node="ha-105786-m03"
	E0314 18:21:34.798050       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 5d3f92bb-be7f-4f0b-9815-0fa785ea455b(default/busybox-5b5d89c9d6-g4zv5) wasn't assumed so cannot be forgotten"
	E0314 18:21:34.798200       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-g4zv5\": pod busybox-5b5d89c9d6-g4zv5 is already assigned to node \"ha-105786-m03\"" pod="default/busybox-5b5d89c9d6-g4zv5"
	I0314 18:21:34.798284       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-g4zv5" node="ha-105786-m03"
	E0314 18:22:12.237093       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sx7t4\": pod kube-proxy-sx7t4 is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sx7t4" node="ha-105786-m04"
	E0314 18:22:12.237517       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod cb4e3e62-340a-4367-bc4a-d72b68f1082a(kube-system/kube-proxy-sx7t4) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.237620       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sx7t4\": pod kube-proxy-sx7t4 is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-sx7t4"
	I0314 18:22:12.237762       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sx7t4" node="ha-105786-m04"
	E0314 18:22:12.368112       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xpmtb\": pod kindnet-xpmtb is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xpmtb" node="ha-105786-m04"
	E0314 18:22:12.369085       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e6de9f42-83ee-4db1-b0bb-152b8104c199(kube-system/kindnet-xpmtb) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.371466       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xpmtb\": pod kindnet-xpmtb is already assigned to node \"ha-105786-m04\"" pod="kube-system/kindnet-xpmtb"
	I0314 18:22:12.371648       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xpmtb" node="ha-105786-m04"
	E0314 18:22:12.368963       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bftws\": pod kube-proxy-bftws is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bftws" node="ha-105786-m04"
	E0314 18:22:12.372902       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 4dfc4fa6-ad4c-4ac7-8330-98bb674b95bc(kube-system/kube-proxy-bftws) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.372957       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bftws\": pod kube-proxy-bftws is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-bftws"
	I0314 18:22:12.373005       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bftws" node="ha-105786-m04"
	E0314 18:22:12.434579       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jsmlk\": pod kube-proxy-jsmlk is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jsmlk" node="ha-105786-m04"
	E0314 18:22:12.435013       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f612bbff-e439-4f91-a45a-773f9f11c1b9(kube-system/kube-proxy-jsmlk) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.435160       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jsmlk\": pod kube-proxy-jsmlk is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-jsmlk"
	I0314 18:22:12.435240       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jsmlk" node="ha-105786-m04"
	
	
	==> kubelet <==
	Mar 14 18:23:39 ha-105786 kubelet[1439]: E0314 18:23:39.304277    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:23:50 ha-105786 kubelet[1439]: E0314 18:23:50.362406    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:23:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:23:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:23:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:23:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:23:52 ha-105786 kubelet[1439]: I0314 18:23:52.304487    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:23:52 ha-105786 kubelet[1439]: E0314 18:23:52.304874    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:24:07 ha-105786 kubelet[1439]: I0314 18:24:07.304230    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:07 ha-105786 kubelet[1439]: E0314 18:24:07.304917    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:24:19 ha-105786 kubelet[1439]: I0314 18:24:19.303886    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:19 ha-105786 kubelet[1439]: E0314 18:24:19.304517    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:24:32 ha-105786 kubelet[1439]: I0314 18:24:32.304298    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:32 ha-105786 kubelet[1439]: E0314 18:24:32.304633    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:24:46 ha-105786 kubelet[1439]: I0314 18:24:46.307273    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:50 ha-105786 kubelet[1439]: E0314 18:24:50.364258    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:24:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: I0314 18:24:53.075825    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: I0314 18:24:53.076189    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: E0314 18:24:53.076493    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:07 ha-105786 kubelet[1439]: I0314 18:25:07.304434    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:25:07 ha-105786 kubelet[1439]: E0314 18:25:07.304802    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105786 -n ha-105786
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopSecondaryNode (150.48s)

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartSecondaryNode (55.68s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartSecondaryNode
ha_test.go:420: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 node start m02 -v=7 --alsologtostderr
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (3.204580416s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:15.331004  965093 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:15.331126  965093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:15.331130  965093 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:15.331135  965093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:15.331359  965093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:15.331554  965093 out.go:298] Setting JSON to false
	I0314 18:25:15.331587  965093 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:15.331716  965093 notify.go:220] Checking for updates...
	I0314 18:25:15.331937  965093 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:15.331953  965093 status.go:255] checking status of ha-105786 ...
	I0314 18:25:15.332377  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.332432  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.351416  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41597
	I0314 18:25:15.351853  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.352457  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.352478  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.352901  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.353151  965093 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:15.354788  965093 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:15.354808  965093 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:15.355096  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.355127  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.370300  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34993
	I0314 18:25:15.370712  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.371214  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.371235  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.371550  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.371734  965093 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:15.374278  965093 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:15.374696  965093 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:15.374723  965093 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:15.374899  965093 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:15.375191  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.375243  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.390259  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36041
	I0314 18:25:15.390701  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.391208  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.391227  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.391585  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.391780  965093 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:15.392034  965093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:15.392080  965093 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:15.394864  965093 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:15.395327  965093 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:15.395369  965093 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:15.395467  965093 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:15.395636  965093 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:15.395832  965093 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:15.395993  965093 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:15.476997  965093 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:15.485225  965093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:15.500437  965093 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:15.500465  965093 api_server.go:166] Checking apiserver status ...
	I0314 18:25:15.500503  965093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:15.516308  965093 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:15.528338  965093 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:15.528387  965093 ssh_runner.go:195] Run: ls
	I0314 18:25:15.533604  965093 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:15.541938  965093 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:15.541960  965093 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:15.541971  965093 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:15.541990  965093 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:15.542310  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.542349  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.558704  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39679
	I0314 18:25:15.559117  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.559624  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.559644  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.560002  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.560265  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:15.561894  965093 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:25:15.561910  965093 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:15.562274  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.562324  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.577835  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41643
	I0314 18:25:15.578392  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.578942  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.578964  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.579243  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.579411  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:25:15.582076  965093 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:15.582487  965093 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:15.582507  965093 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:15.582667  965093 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:15.582956  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:15.582994  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:15.597321  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39559
	I0314 18:25:15.597704  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:15.598193  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:15.598219  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:15.598541  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:15.598749  965093 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:25:15.598982  965093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:15.599005  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:25:15.601469  965093 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:15.601822  965093 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:15.601853  965093 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:15.601994  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:25:15.602164  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:25:15.602367  965093 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:25:15.602490  965093 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:18.104557  965093 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:18.104694  965093 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:18.104718  965093 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:18.104725  965093 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:18.104746  965093 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:18.104767  965093 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:18.105170  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.105221  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.121160  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42349
	I0314 18:25:18.121655  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.122184  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.122214  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.122557  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.122776  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:18.124562  965093 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:18.124582  965093 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:18.124865  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.124899  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.140997  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40393
	I0314 18:25:18.141380  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.141890  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.141915  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.142228  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.142479  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:18.145215  965093 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:18.145646  965093 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:18.145674  965093 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:18.145767  965093 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:18.146139  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.146181  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.160361  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34755
	I0314 18:25:18.160731  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.161191  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.161215  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.161541  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.161766  965093 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:18.161959  965093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:18.161980  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:18.164346  965093 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:18.164778  965093 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:18.164813  965093 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:18.164974  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:18.165152  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:18.165394  965093 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:18.165561  965093 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:18.249204  965093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:18.268714  965093 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:18.268743  965093 api_server.go:166] Checking apiserver status ...
	I0314 18:25:18.268776  965093 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:18.284537  965093 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:18.294992  965093 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:18.295040  965093 ssh_runner.go:195] Run: ls
	I0314 18:25:18.300271  965093 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:18.307851  965093 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:18.307872  965093 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:18.307881  965093 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:18.307896  965093 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:18.308188  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.308239  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.324083  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46177
	I0314 18:25:18.324508  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.325007  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.325032  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.325401  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.325610  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:18.327417  965093 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:18.327434  965093 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:18.327766  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.327811  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.342294  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33477
	I0314 18:25:18.342767  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.343310  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.343337  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.343641  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.343818  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:18.346841  965093 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:18.347507  965093 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:18.347545  965093 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:18.347716  965093 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:18.347994  965093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:18.348030  965093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:18.362850  965093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45073
	I0314 18:25:18.363287  965093 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:18.363762  965093 main.go:141] libmachine: Using API Version  1
	I0314 18:25:18.363790  965093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:18.364242  965093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:18.364452  965093 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:18.364702  965093 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:18.364729  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:18.367984  965093 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:18.368608  965093 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:18.368631  965093 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:18.368863  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:18.369070  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:18.369254  965093 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:18.369398  965093 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:18.456794  965093 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:18.474434  965093 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
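For reference, the status probes visible in the log above are: a disk-usage check run over SSH on each node (sh -c "df -h /var | awk 'NR==2{print $5}'"), a kubelet check via "sudo systemctl is-active --quiet service kubelet", and, for control-plane nodes, a GET against the load-balanced apiserver endpoint https://192.168.39.254:8443/healthz. The freezer-cgroup lookup that exits with status 1 is logged only as a warning and the check falls through to the healthz probe. The sketch below simply reruns those three probes locally with Go's standard library; it is an illustration of the commands seen in the log, not minikube's status implementation, and the endpoint URL is copied from the log.

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"os/exec"
	"time"
)

func main() {
	// Disk-usage probe: the same pipeline the status check runs on each node.
	out, err := exec.Command("sh", "-c", `df -h /var | awk 'NR==2{print $5}'`).Output()
	fmt.Printf("df /var: %q err=%v\n", string(out), err)

	// Kubelet probe: a zero exit status means the unit is active
	// (arguments mirror the command as it appears in the log).
	kubeletErr := exec.Command("systemctl", "is-active", "--quiet", "service", "kubelet").Run()
	fmt.Printf("kubelet active: %v\n", kubeletErr == nil)

	// Apiserver probe: the log checks /healthz on the load-balanced endpoint.
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	if resp, err := client.Get("https://192.168.39.254:8443/healthz"); err != nil {
		fmt.Println("healthz error:", err)
	} else {
		resp.Body.Close()
		fmt.Println("healthz:", resp.StatusCode)
	}
}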
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (5.144407236s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:19.537305  965188 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:19.537535  965188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:19.537545  965188 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:19.537549  965188 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:19.537728  965188 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:19.537908  965188 out.go:298] Setting JSON to false
	I0314 18:25:19.537940  965188 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:19.538064  965188 notify.go:220] Checking for updates...
	I0314 18:25:19.538368  965188 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:19.538411  965188 status.go:255] checking status of ha-105786 ...
	I0314 18:25:19.539106  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.539191  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.561698  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42255
	I0314 18:25:19.562165  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.562771  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.562796  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.563132  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.563352  965188 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:19.564949  965188 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:19.564972  965188 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:19.565291  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.565330  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.580239  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44319
	I0314 18:25:19.580684  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.581099  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.581135  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.581599  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.581814  965188 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:19.584302  965188 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:19.584727  965188 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:19.584766  965188 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:19.584891  965188 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:19.585170  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.585210  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.599787  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40033
	I0314 18:25:19.600175  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.600601  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.600627  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.600980  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.601274  965188 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:19.601548  965188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:19.601571  965188 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:19.604089  965188 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:19.604553  965188 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:19.604588  965188 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:19.604744  965188 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:19.604946  965188 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:19.605093  965188 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:19.605203  965188 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:19.688811  965188 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:19.696099  965188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:19.712184  965188 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:19.712227  965188 api_server.go:166] Checking apiserver status ...
	I0314 18:25:19.712270  965188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:19.733382  965188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:19.744813  965188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:19.744856  965188 ssh_runner.go:195] Run: ls
	I0314 18:25:19.752381  965188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:19.760657  965188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:19.760683  965188 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:19.760694  965188 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:19.760720  965188 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:19.761034  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.761076  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.776414  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45157
	I0314 18:25:19.776926  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.777599  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.777625  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.778042  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.778291  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:19.780149  965188 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:25:19.780168  965188 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:19.780549  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.780610  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.795963  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42557
	I0314 18:25:19.796510  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.797026  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.797051  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.797379  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.797583  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:25:19.800623  965188 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:19.801188  965188 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:19.801218  965188 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:19.801456  965188 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:19.801757  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:19.801792  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:19.817477  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45405
	I0314 18:25:19.817924  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:19.818438  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:19.818478  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:19.818794  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:19.819018  965188 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:25:19.819235  965188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:19.819263  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:25:19.822126  965188 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:19.822600  965188 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:19.822628  965188 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:19.822802  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:25:19.823011  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:25:19.823197  965188 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:25:19.823363  965188 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:21.180552  965188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:21.180612  965188 retry.go:31] will retry after 224.258702ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:24.248581  965188 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:24.248709  965188 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:24.248729  965188 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:24.248736  965188 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:24.248759  965188 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:24.248770  965188 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:24.249070  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.249148  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.264879  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33707
	I0314 18:25:24.265311  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.265850  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.265875  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.266301  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.266535  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:24.268190  965188 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:24.268227  965188 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:24.268512  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.268546  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.283014  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0314 18:25:24.283445  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.284012  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.284034  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.284366  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.284569  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:24.287747  965188 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:24.288226  965188 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:24.288255  965188 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:24.288414  965188 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:24.288755  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.288797  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.304289  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38197
	I0314 18:25:24.304947  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.306280  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.306316  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.306682  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.306916  965188 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:24.307137  965188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:24.307159  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:24.309746  965188 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:24.310295  965188 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:24.310326  965188 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:24.310488  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:24.310710  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:24.310896  965188 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:24.311107  965188 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:24.394521  965188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:24.413908  965188 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:24.413943  965188 api_server.go:166] Checking apiserver status ...
	I0314 18:25:24.413983  965188 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:24.430988  965188 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:24.445250  965188 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:24.445313  965188 ssh_runner.go:195] Run: ls
	I0314 18:25:24.450367  965188 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:24.456404  965188 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:24.456431  965188 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:24.456443  965188 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:24.456463  965188 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:24.456858  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.456908  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.472069  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41285
	I0314 18:25:24.472586  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.473122  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.473149  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.473546  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.473798  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:24.475449  965188 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:24.475469  965188 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:24.475876  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.475920  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.490733  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42003
	I0314 18:25:24.491155  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.491572  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.491592  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.491913  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.492110  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:24.494848  965188 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:24.495351  965188 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:24.495379  965188 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:24.495513  965188 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:24.495908  965188 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:24.495951  965188 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:24.511097  965188 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45281
	I0314 18:25:24.511534  965188 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:24.511968  965188 main.go:141] libmachine: Using API Version  1
	I0314 18:25:24.511987  965188 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:24.512385  965188 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:24.512564  965188 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:24.512800  965188 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:24.512832  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:24.515477  965188 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:24.515967  965188 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:24.516000  965188 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:24.516182  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:24.516375  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:24.516555  965188 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:24.516709  965188 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:24.605765  965188 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:24.623429  965188 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
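The repeated "no route to host" failures against 192.168.39.245:22 in the log above are what flip ha-105786-m02 to Host:Error / Kubelet:Nonexistent: the status check retries the SSH dial after a short backoff, then gives up and records the node as unreachable. A minimal, self-contained sketch of that retry pattern (a hypothetical dialWithRetry helper, not minikube's actual retry code) looks like this:

package main

import (
	"fmt"
	"net"
	"time"
)

// dialWithRetry illustrates the pattern in the log: each failed dial is
// retried after a growing backoff until an overall deadline expires, at
// which point the caller reports the node as errored.
func dialWithRetry(addr string, perDial, total time.Duration) (net.Conn, error) {
	deadline := time.Now().Add(total)
	backoff := 200 * time.Millisecond
	for {
		conn, err := net.DialTimeout("tcp", addr, perDial)
		if err == nil {
			return conn, nil
		}
		if time.Now().After(deadline) {
			return nil, fmt.Errorf("giving up on %s: %w", addr, err)
		}
		fmt.Printf("dial failure (will retry after %v): %v\n", backoff, err)
		time.Sleep(backoff)
		backoff *= 2
	}
}

func main() {
	// 192.168.39.245:22 is the unreachable ha-105786-m02 SSH endpoint from the log.
	if _, err := dialWithRetry("192.168.39.245:22", 3*time.Second, 10*time.Second); err != nil {
		fmt.Println(err)
	}
}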
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (4.139707418s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:26.905217  965283 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:26.905467  965283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:26.905476  965283 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:26.905481  965283 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:26.905658  965283 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:26.905850  965283 out.go:298] Setting JSON to false
	I0314 18:25:26.905881  965283 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:26.905997  965283 notify.go:220] Checking for updates...
	I0314 18:25:26.906228  965283 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:26.906242  965283 status.go:255] checking status of ha-105786 ...
	I0314 18:25:26.906580  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:26.906633  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:26.925404  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35705
	I0314 18:25:26.925906  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:26.926637  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:26.926669  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:26.926970  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:26.927167  965283 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:26.928791  965283 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:26.928812  965283 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:26.929090  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:26.929158  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:26.944731  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38263
	I0314 18:25:26.945173  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:26.945647  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:26.945667  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:26.946066  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:26.946278  965283 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:26.949381  965283 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:26.949857  965283 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:26.949885  965283 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:26.950079  965283 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:26.950426  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:26.950478  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:26.966896  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37511
	I0314 18:25:26.967376  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:26.967901  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:26.967924  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:26.968279  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:26.968489  965283 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:26.968732  965283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:26.968755  965283 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:26.971298  965283 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:26.971738  965283 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:26.971762  965283 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:26.971889  965283 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:26.972048  965283 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:26.972196  965283 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:26.972326  965283 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:27.052564  965283 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:27.060173  965283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:27.077555  965283 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:27.077588  965283 api_server.go:166] Checking apiserver status ...
	I0314 18:25:27.077623  965283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:27.095701  965283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:27.107290  965283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:27.107345  965283 ssh_runner.go:195] Run: ls
	I0314 18:25:27.112774  965283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:27.119158  965283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:27.119180  965283 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:27.119190  965283 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:27.119222  965283 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:27.119555  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:27.119593  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:27.135066  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40005
	I0314 18:25:27.135521  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:27.136073  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:27.136097  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:27.136449  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:27.136652  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:27.138500  965283 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:25:27.138518  965283 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:27.138867  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:27.138914  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:27.156961  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33243
	I0314 18:25:27.157366  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:27.157880  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:27.157903  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:27.158233  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:27.158459  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:25:27.161396  965283 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:27.161816  965283 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:27.161833  965283 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:27.161979  965283 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:27.162400  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:27.162446  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:27.178029  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0314 18:25:27.178483  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:27.178930  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:27.178952  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:27.179315  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:27.179567  965283 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:25:27.179793  965283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:27.179825  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:25:27.182618  965283 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:27.183046  965283 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:27.183087  965283 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:27.183226  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:25:27.183402  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:25:27.183553  965283 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:25:27.183728  965283 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:27.320489  965283 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:27.320553  965283 retry.go:31] will retry after 243.358217ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:30.616520  965283 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:30.616648  965283 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:30.616676  965283 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:30.616690  965283 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:30.616712  965283 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:30.616720  965283 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:30.617192  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.617275  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.632902  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38691
	I0314 18:25:30.633408  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.633915  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.633947  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.634278  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.634470  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:30.636057  965283 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:30.636079  965283 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:30.636445  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.636485  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.650821  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39135
	I0314 18:25:30.651236  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.651692  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.651708  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.652061  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.652292  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:30.655746  965283 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:30.656235  965283 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:30.656266  965283 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:30.656406  965283 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:30.656915  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.656991  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.671823  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37015
	I0314 18:25:30.672299  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.672811  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.672836  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.673212  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.673433  965283 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:30.673669  965283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:30.673696  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:30.676564  965283 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:30.677039  965283 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:30.677066  965283 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:30.677221  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:30.677412  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:30.677629  965283 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:30.677773  965283 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:30.762877  965283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:30.781136  965283 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:30.781164  965283 api_server.go:166] Checking apiserver status ...
	I0314 18:25:30.781202  965283 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:30.800609  965283 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:30.811465  965283 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:30.811509  965283 ssh_runner.go:195] Run: ls
	I0314 18:25:30.816684  965283 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:30.821827  965283 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:30.821851  965283 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:30.821861  965283 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:30.821877  965283 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:30.822159  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.822197  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.837494  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33259
	I0314 18:25:30.837912  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.838444  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.838470  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.838840  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.839084  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:30.840906  965283 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:30.840924  965283 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:30.841210  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.841245  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.855454  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43331
	I0314 18:25:30.855848  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.856260  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.856279  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.856564  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.856790  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:30.859617  965283 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:30.860099  965283 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:30.860120  965283 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:30.860293  965283 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:30.860647  965283 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:30.860687  965283 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:30.875173  965283 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43319
	I0314 18:25:30.875560  965283 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:30.876085  965283 main.go:141] libmachine: Using API Version  1
	I0314 18:25:30.876106  965283 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:30.876455  965283 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:30.876656  965283 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:30.876849  965283 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:30.876890  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:30.879299  965283 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:30.879838  965283 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:30.879865  965283 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:30.880038  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:30.880221  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:30.880390  965283 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:30.880543  965283 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:30.968959  965283 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:30.985295  965283 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
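Editor's note: the failure mode in the transcript above is consistent: status collection for ha-105786-m02 depends on an SSH session to 192.168.39.245:22 (sshutil.go:53/64), every dial ends in "connect: no route to host", and the node is therefore reported as Host:Error / Kubelet:Nonexistent while the other nodes stay Running. A minimal reachability probe for that endpoint, written as a standalone sketch (the address and timeout are taken from the log for illustration only; they are not part of ha_test.go or the minikube status path), could look like this in Go:

// probe_ssh.go - illustrative sketch, not part of the test suite.
// Checks whether a node's SSH endpoint accepts TCP connections; the
// address below is the ha-105786-m02 IP reported by the DHCP lease in
// the log and would normally come from the machine driver.
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	addr := "192.168.39.245:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// A "connect: no route to host" here matches the sshutil.go:64 failures above.
		fmt.Printf("unreachable: %v\n", err)
		return
	}
	defer conn.Close()
	fmt.Println("reachable")
}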
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (4.41768315s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:32.921105  965389 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:32.921370  965389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:32.921380  965389 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:32.921386  965389 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:32.921640  965389 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:32.921889  965389 out.go:298] Setting JSON to false
	I0314 18:25:32.921928  965389 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:32.921963  965389 notify.go:220] Checking for updates...
	I0314 18:25:32.922357  965389 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:32.922376  965389 status.go:255] checking status of ha-105786 ...
	I0314 18:25:32.922774  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:32.922832  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:32.942552  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38803
	I0314 18:25:32.942987  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:32.943597  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:32.943630  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:32.943948  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:32.944194  965389 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:32.945955  965389 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:32.945973  965389 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:32.946259  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:32.946301  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:32.962015  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36825
	I0314 18:25:32.962482  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:32.962937  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:32.962973  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:32.963274  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:32.963470  965389 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:32.966438  965389 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:32.966909  965389 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:32.966940  965389 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:32.967090  965389 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:32.967431  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:32.967478  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:32.982810  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42211
	I0314 18:25:32.983205  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:32.983575  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:32.983593  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:32.983917  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:32.984140  965389 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:32.984358  965389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:32.984383  965389 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:32.987255  965389 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:32.987698  965389 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:32.987745  965389 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:32.987977  965389 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:32.988157  965389 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:32.988320  965389 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:32.988447  965389 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:33.070339  965389 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:33.077610  965389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:33.095084  965389 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:33.095123  965389 api_server.go:166] Checking apiserver status ...
	I0314 18:25:33.095183  965389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:33.111108  965389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:33.122545  965389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:33.122610  965389 ssh_runner.go:195] Run: ls
	I0314 18:25:33.128881  965389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:33.136145  965389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:33.136166  965389 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:33.136177  965389 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:33.136192  965389 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:33.136504  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:33.136546  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:33.151714  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38313
	I0314 18:25:33.152197  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:33.152751  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:33.152774  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:33.153100  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:33.153261  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:33.154899  965389 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:25:33.154917  965389 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:33.155231  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:33.155267  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:33.170199  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36873
	I0314 18:25:33.170582  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:33.171073  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:33.171094  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:33.171499  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:33.171715  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:25:33.174597  965389 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:33.175008  965389 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:33.175038  965389 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:33.175172  965389 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:33.175473  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:33.175506  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:33.190018  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41601
	I0314 18:25:33.190409  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:33.190956  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:33.190984  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:33.191300  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:33.191497  965389 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:25:33.191703  965389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:33.191731  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:25:33.194121  965389 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:33.194515  965389 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:33.194536  965389 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:33.194642  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:25:33.194806  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:25:33.194926  965389 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:25:33.195074  965389 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:33.688424  965389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:33.688484  965389 retry.go:31] will retry after 171.898569ms: dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:36.920447  965389 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:36.920565  965389 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:36.920584  965389 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:36.920591  965389 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:36.920614  965389 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:36.920624  965389 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:36.920947  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:36.921000  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:36.937172  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33605
	I0314 18:25:36.937679  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:36.938180  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:36.938204  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:36.938590  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:36.938773  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:36.940155  965389 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:36.940172  965389 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:36.940483  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:36.940526  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:36.955433  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34367
	I0314 18:25:36.955886  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:36.956338  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:36.956361  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:36.956662  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:36.956835  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:36.959812  965389 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:36.960278  965389 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:36.960312  965389 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:36.960440  965389 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:36.960745  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:36.960802  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:36.975139  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35715
	I0314 18:25:36.975667  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:36.976131  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:36.976152  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:36.976479  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:36.976741  965389 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:36.976973  965389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:36.976994  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:36.979955  965389 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:36.980413  965389 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:36.980435  965389 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:36.980640  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:36.980814  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:36.980975  965389 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:36.981109  965389 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:37.060982  965389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:37.076135  965389 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:37.076168  965389 api_server.go:166] Checking apiserver status ...
	I0314 18:25:37.076233  965389 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:37.091042  965389 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:37.101199  965389 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:37.101264  965389 ssh_runner.go:195] Run: ls
	I0314 18:25:37.106503  965389 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:37.114350  965389 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:37.114372  965389 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:37.114382  965389 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:37.114397  965389 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:37.114713  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:37.114745  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:37.129852  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42621
	I0314 18:25:37.130277  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:37.130870  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:37.130893  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:37.131242  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:37.131434  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:37.133100  965389 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:37.133121  965389 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:37.133523  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:37.133567  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:37.149167  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36417
	I0314 18:25:37.149630  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:37.150124  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:37.150145  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:37.150488  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:37.150682  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:37.153538  965389 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:37.153976  965389 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:37.154002  965389 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:37.154137  965389 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:37.154473  965389 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:37.154533  965389 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:37.169499  965389 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41347
	I0314 18:25:37.169865  965389 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:37.170355  965389 main.go:141] libmachine: Using API Version  1
	I0314 18:25:37.170383  965389 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:37.170761  965389 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:37.170952  965389 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:37.171125  965389 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:37.171149  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:37.173729  965389 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:37.174169  965389 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:37.174203  965389 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:37.174343  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:37.174518  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:37.174685  965389 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:37.174773  965389 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:37.260702  965389 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:37.276346  965389 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (3.762357843s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Error
	kubelet: Nonexistent
	apiserver: Nonexistent
	kubeconfig: Configured
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:39.928511  965483 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:39.928717  965483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:39.928727  965483 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:39.928731  965483 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:39.928995  965483 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:39.929174  965483 out.go:298] Setting JSON to false
	I0314 18:25:39.929214  965483 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:39.929326  965483 notify.go:220] Checking for updates...
	I0314 18:25:39.929565  965483 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:39.929580  965483 status.go:255] checking status of ha-105786 ...
	I0314 18:25:39.929938  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:39.930002  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:39.950853  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39339
	I0314 18:25:39.951319  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:39.951902  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:39.951926  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:39.952334  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:39.952563  965483 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:39.954249  965483 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:39.954266  965483 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:39.954545  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:39.954581  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:39.969375  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43069
	I0314 18:25:39.969766  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:39.970190  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:39.970218  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:39.970551  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:39.970745  965483 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:39.973225  965483 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:39.973625  965483 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:39.973652  965483 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:39.973784  965483 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:39.974081  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:39.974139  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:39.988411  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35071
	I0314 18:25:39.988730  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:39.989197  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:39.989218  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:39.989531  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:39.989699  965483 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:39.989870  965483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:39.989920  965483 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:39.992817  965483 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:39.993260  965483 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:39.993295  965483 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:39.993416  965483 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:39.993580  965483 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:39.993725  965483 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:39.993842  965483 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:40.076707  965483 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:40.082816  965483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:40.098947  965483 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:40.098978  965483 api_server.go:166] Checking apiserver status ...
	I0314 18:25:40.099016  965483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:40.113835  965483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:40.126303  965483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:40.126367  965483 ssh_runner.go:195] Run: ls
	I0314 18:25:40.131516  965483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:40.136581  965483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:40.136607  965483 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:40.136621  965483 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:40.136660  965483 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:40.137084  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:40.137124  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:40.152799  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45097
	I0314 18:25:40.153281  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:40.153844  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:40.153874  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:40.154257  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:40.154485  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:40.156090  965483 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:25:40.156111  965483 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:40.156410  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:40.156445  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:40.171882  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39239
	I0314 18:25:40.172468  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:40.173001  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:40.173018  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:40.173406  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:40.173607  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:25:40.176613  965483 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:40.177023  965483 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:40.177042  965483 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:40.177206  965483 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:25:40.177484  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:40.177523  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:40.191925  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46189
	I0314 18:25:40.192381  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:40.192804  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:40.192824  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:40.193127  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:40.193314  965483 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:25:40.193496  965483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:40.193516  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:25:40.195996  965483 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:40.196358  965483 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:25:40.196387  965483 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:25:40.196509  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:25:40.196691  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:25:40.196846  965483 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:25:40.197013  965483 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	W0314 18:25:43.256562  965483 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.245:22: connect: no route to host
	W0314 18:25:43.256681  965483 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	E0314 18:25:43.256709  965483 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:43.256723  965483 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Error Kubelet:Nonexistent APIServer:Nonexistent Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:25:43.256749  965483 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.245:22: connect: no route to host
	I0314 18:25:43.256766  965483 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:43.257209  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.257310  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.274045  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38283
	I0314 18:25:43.274558  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.275158  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.275184  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.275542  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.275774  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:43.277348  965483 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:43.277365  965483 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:43.277680  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.277725  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.293652  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33283
	I0314 18:25:43.294146  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.294706  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.294748  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.295114  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.295341  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:43.298468  965483 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:43.298959  965483 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:43.298993  965483 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:43.299124  965483 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:43.299450  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.299505  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.314552  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34513
	I0314 18:25:43.315039  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.315555  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.315577  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.316027  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.316274  965483 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:43.316524  965483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:43.316557  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:43.319927  965483 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:43.320585  965483 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:43.320623  965483 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:43.320875  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:43.321065  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:43.321213  965483 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:43.321349  965483 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:43.408178  965483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:43.423586  965483 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:43.423619  965483 api_server.go:166] Checking apiserver status ...
	I0314 18:25:43.423655  965483 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:43.440233  965483 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:43.451261  965483 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:43.451310  965483 ssh_runner.go:195] Run: ls
	I0314 18:25:43.457459  965483 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:43.467012  965483 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:43.467127  965483 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:43.467146  965483 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:43.467172  965483 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:43.467552  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.467605  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.483695  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44475
	I0314 18:25:43.484133  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.484626  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.484651  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.485001  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.485240  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:43.486971  965483 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:43.486996  965483 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:43.487285  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.487328  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.501722  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45865
	I0314 18:25:43.502110  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.502509  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.502537  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.502970  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.503192  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:43.506352  965483 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:43.506870  965483 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:43.506910  965483 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:43.507009  965483 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:43.507460  965483 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:43.507504  965483 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:43.522072  965483 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38765
	I0314 18:25:43.522453  965483 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:43.522909  965483 main.go:141] libmachine: Using API Version  1
	I0314 18:25:43.522929  965483 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:43.523274  965483 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:43.523462  965483 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:43.523660  965483 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:43.523688  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:43.526210  965483 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:43.526678  965483 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:43.526705  965483 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:43.526855  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:43.527048  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:43.527229  965483 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:43.527381  965483 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:43.611930  965483 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:43.629133  965483 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 7 (660.176764ms)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:48.040065  965611 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:48.040592  965611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:48.040611  965611 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:48.040618  965611 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:48.041047  965611 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:48.041390  965611 out.go:298] Setting JSON to false
	I0314 18:25:48.041444  965611 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:48.041556  965611 notify.go:220] Checking for updates...
	I0314 18:25:48.042325  965611 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:48.042351  965611 status.go:255] checking status of ha-105786 ...
	I0314 18:25:48.042774  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.042829  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.060935  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45377
	I0314 18:25:48.061378  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.061980  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.062010  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.062420  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.062622  965611 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:48.064536  965611 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:48.064559  965611 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:48.064844  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.064881  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.079193  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38325
	I0314 18:25:48.079578  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.079958  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.079983  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.080340  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.080568  965611 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:48.083948  965611 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:48.084391  965611 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:48.084421  965611 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:48.084577  965611 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:48.084856  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.084894  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.099646  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44833
	I0314 18:25:48.100025  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.100547  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.100575  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.100895  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.101111  965611 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:48.101324  965611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:48.101352  965611 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:48.104351  965611 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:48.104774  965611 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:48.104800  965611 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:48.104944  965611 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:48.105097  965611 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:48.105243  965611 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:48.105421  965611 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:48.184621  965611 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:48.190871  965611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:48.218054  965611 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:48.218085  965611 api_server.go:166] Checking apiserver status ...
	I0314 18:25:48.218124  965611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:48.234729  965611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:48.247534  965611 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:48.247578  965611 ssh_runner.go:195] Run: ls
	I0314 18:25:48.253185  965611 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:48.259861  965611 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:48.259890  965611 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:48.259905  965611 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:48.259935  965611 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:48.260267  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.260334  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.277865  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45493
	I0314 18:25:48.278360  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.278874  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.278900  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.279234  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.279408  965611 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:48.280994  965611 status.go:330] ha-105786-m02 host status = "Stopped" (err=<nil>)
	I0314 18:25:48.281010  965611 status.go:343] host is not running, skipping remaining checks
	I0314 18:25:48.281015  965611 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:48.281030  965611 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:48.281356  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.281406  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.296150  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39477
	I0314 18:25:48.296592  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.296988  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.297007  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.297308  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.297528  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:48.299038  965611 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:48.299053  965611 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:48.299315  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.299347  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.314039  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35921
	I0314 18:25:48.314379  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.314804  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.314825  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.315119  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.315302  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:48.317899  965611 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:48.318309  965611 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:48.318339  965611 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:48.318470  965611 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:48.318865  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.318917  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.333063  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41137
	I0314 18:25:48.333494  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.333972  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.333987  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.334348  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.334544  965611 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:48.334772  965611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:48.334794  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:48.337530  965611 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:48.338002  965611 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:48.338044  965611 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:48.338201  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:48.338398  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:48.338570  965611 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:48.338702  965611 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:48.416685  965611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:48.434072  965611 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:48.434112  965611 api_server.go:166] Checking apiserver status ...
	I0314 18:25:48.434157  965611 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:48.453273  965611 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:48.465619  965611 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:48.465679  965611 ssh_runner.go:195] Run: ls
	I0314 18:25:48.471449  965611 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:48.476365  965611 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:48.476388  965611 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:48.476398  965611 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:48.476414  965611 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:48.476699  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.476731  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.492631  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42959
	I0314 18:25:48.493089  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.493635  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.493657  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.493994  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.494254  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:48.496095  965611 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:48.496112  965611 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:48.496447  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.496484  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.511984  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0314 18:25:48.512418  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.512877  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.512901  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.513208  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.513396  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:48.516129  965611 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:48.516597  965611 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:48.516619  965611 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:48.516775  965611 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:48.517091  965611 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:48.517129  965611 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:48.533693  965611 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37971
	I0314 18:25:48.534127  965611 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:48.534640  965611 main.go:141] libmachine: Using API Version  1
	I0314 18:25:48.534664  965611 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:48.534996  965611 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:48.535231  965611 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:48.535463  965611 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:48.535487  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:48.538200  965611 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:48.538591  965611 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:48.538623  965611 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:48.538747  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:48.538933  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:48.539127  965611 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:48.539256  965611 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:48.623762  965611 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:48.639721  965611 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 7 (677.644931ms)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:25:59.174414  965705 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:25:59.174539  965705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:59.174547  965705 out.go:304] Setting ErrFile to fd 2...
	I0314 18:25:59.174551  965705 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:25:59.174744  965705 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:25:59.174926  965705 out.go:298] Setting JSON to false
	I0314 18:25:59.174955  965705 mustload.go:65] Loading cluster: ha-105786
	I0314 18:25:59.175088  965705 notify.go:220] Checking for updates...
	I0314 18:25:59.175443  965705 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:25:59.175469  965705 status.go:255] checking status of ha-105786 ...
	I0314 18:25:59.175865  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.175932  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.196735  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42073
	I0314 18:25:59.197228  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.197862  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.197893  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.198318  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.198567  965705 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:25:59.200232  965705 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:25:59.200252  965705 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:59.200565  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.200612  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.215823  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43491
	I0314 18:25:59.216298  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.216878  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.216910  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.217314  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.217550  965705 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:25:59.220622  965705 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:59.221110  965705 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:59.221140  965705 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:59.221265  965705 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:25:59.221668  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.221718  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.236845  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33749
	I0314 18:25:59.237297  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.237762  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.237782  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.238087  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.238283  965705 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:25:59.238465  965705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:59.238494  965705 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:25:59.241259  965705 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:59.241763  965705 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:25:59.241789  965705 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:25:59.241914  965705 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:25:59.242117  965705 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:25:59.242290  965705 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:25:59.242413  965705 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:25:59.327471  965705 ssh_runner.go:195] Run: systemctl --version
	I0314 18:25:59.335662  965705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:59.354949  965705 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:59.354978  965705 api_server.go:166] Checking apiserver status ...
	I0314 18:25:59.355008  965705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:59.373464  965705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:25:59.386337  965705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:59.386388  965705 ssh_runner.go:195] Run: ls
	I0314 18:25:59.391473  965705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:59.396832  965705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:59.396852  965705 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:25:59.396862  965705 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:59.396906  965705 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:25:59.397291  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.397336  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.412821  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43455
	I0314 18:25:59.413237  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.413798  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.413829  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.414184  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.414398  965705 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:25:59.416081  965705 status.go:330] ha-105786-m02 host status = "Stopped" (err=<nil>)
	I0314 18:25:59.416099  965705 status.go:343] host is not running, skipping remaining checks
	I0314 18:25:59.416107  965705 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:59.416137  965705 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:25:59.416472  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.416511  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.432170  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43673
	I0314 18:25:59.432655  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.433140  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.433166  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.433458  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.433682  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:25:59.435325  965705 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:25:59.435344  965705 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:59.435722  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.435766  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.450526  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32819
	I0314 18:25:59.451042  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.451517  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.451540  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.451867  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.452047  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:25:59.454831  965705 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:59.455251  965705 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:59.455274  965705 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:59.455362  965705 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:25:59.455653  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.455692  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.469641  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41917
	I0314 18:25:59.470032  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.470857  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.470901  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.471263  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.471572  965705 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:25:59.471926  965705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:59.472010  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:25:59.475154  965705 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:59.475576  965705 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:25:59.475605  965705 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:25:59.475849  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:25:59.476028  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:25:59.476174  965705 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:25:59.476315  965705 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:25:59.558000  965705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:59.575762  965705 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:25:59.575792  965705 api_server.go:166] Checking apiserver status ...
	I0314 18:25:59.575835  965705 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:25:59.592051  965705 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:25:59.604873  965705 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:25:59.604925  965705 ssh_runner.go:195] Run: ls
	I0314 18:25:59.614912  965705 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:25:59.620999  965705 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:25:59.621025  965705 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:25:59.621035  965705 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:25:59.621051  965705 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:25:59.621346  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.621389  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.638273  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40787
	I0314 18:25:59.638716  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.639207  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.639232  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.639567  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.639751  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:25:59.641453  965705 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:25:59.641473  965705 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:59.641784  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.641829  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.657331  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38555
	I0314 18:25:59.657791  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.658281  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.658306  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.658565  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.658773  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:25:59.661580  965705 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:59.661930  965705 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:59.661958  965705 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:59.662075  965705 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:25:59.662388  965705 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:25:59.662422  965705 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:25:59.677903  965705 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38559
	I0314 18:25:59.678261  965705 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:25:59.678691  965705 main.go:141] libmachine: Using API Version  1
	I0314 18:25:59.678710  965705 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:25:59.679035  965705 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:25:59.679253  965705 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:25:59.679450  965705 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:25:59.679475  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:25:59.681995  965705 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:59.682397  965705 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:25:59.682435  965705 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:25:59.682586  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:25:59.682783  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:25:59.682972  965705 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:25:59.683096  965705 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:25:59.772693  965705 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:25:59.790497  965705 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:428: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:428: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 7 (662.496889ms)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	
	ha-105786-m03
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:26:07.799740  965801 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:26:07.799857  965801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:26:07.799864  965801 out.go:304] Setting ErrFile to fd 2...
	I0314 18:26:07.799868  965801 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:26:07.800103  965801 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:26:07.800330  965801 out.go:298] Setting JSON to false
	I0314 18:26:07.800365  965801 mustload.go:65] Loading cluster: ha-105786
	I0314 18:26:07.800491  965801 notify.go:220] Checking for updates...
	I0314 18:26:07.800878  965801 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:26:07.800904  965801 status.go:255] checking status of ha-105786 ...
	I0314 18:26:07.801410  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:07.801470  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:07.821985  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41055
	I0314 18:26:07.822447  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:07.823008  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:07.823027  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:07.823439  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:07.823642  965801 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:26:07.825400  965801 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:26:07.825421  965801 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:26:07.825873  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:07.825932  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:07.840450  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33771
	I0314 18:26:07.840845  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:07.841309  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:07.841345  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:07.841665  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:07.841854  965801 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:26:07.844520  965801 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:26:07.845023  965801 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:26:07.845055  965801 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:26:07.845218  965801 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:26:07.845506  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:07.845540  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:07.860012  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36445
	I0314 18:26:07.860418  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:07.860854  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:07.860877  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:07.861228  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:07.861434  965801 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:26:07.861621  965801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:26:07.861655  965801 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:26:07.864555  965801 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:26:07.865080  965801 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:26:07.865104  965801 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:26:07.865248  965801 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:26:07.865422  965801 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:26:07.865627  965801 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:26:07.865755  965801 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:26:07.946000  965801 ssh_runner.go:195] Run: systemctl --version
	I0314 18:26:07.954004  965801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:26:07.970344  965801 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:26:07.970379  965801 api_server.go:166] Checking apiserver status ...
	I0314 18:26:07.970421  965801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:26:07.986110  965801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup
	W0314 18:26:07.997478  965801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1192/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:26:07.997534  965801 ssh_runner.go:195] Run: ls
	I0314 18:26:08.003674  965801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:26:08.008878  965801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:26:08.008905  965801 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:26:08.008918  965801 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:26:08.008968  965801 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:26:08.009286  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.009322  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.024370  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43009
	I0314 18:26:08.024860  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.025451  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.025482  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.025863  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.026115  965801 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:26:08.027820  965801 status.go:330] ha-105786-m02 host status = "Stopped" (err=<nil>)
	I0314 18:26:08.027848  965801 status.go:343] host is not running, skipping remaining checks
	I0314 18:26:08.027854  965801 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:26:08.027869  965801 status.go:255] checking status of ha-105786-m03 ...
	I0314 18:26:08.028138  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.028172  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.043112  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39127
	I0314 18:26:08.043487  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.043908  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.043934  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.044335  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.044538  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:26:08.046009  965801 status.go:330] ha-105786-m03 host status = "Running" (err=<nil>)
	I0314 18:26:08.046032  965801 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:26:08.046454  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.046501  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.062094  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42367
	I0314 18:26:08.062447  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.062907  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.062927  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.063363  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.063615  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:26:08.067457  965801 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:08.067940  965801 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:26:08.067970  965801 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:08.068190  965801 host.go:66] Checking if "ha-105786-m03" exists ...
	I0314 18:26:08.068649  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.068699  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.084814  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36627
	I0314 18:26:08.085218  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.085686  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.085712  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.086018  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.086209  965801 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:26:08.086392  965801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:26:08.086418  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:26:08.089127  965801 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:08.089594  965801 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:26:08.089624  965801 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:08.089740  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:26:08.089935  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:26:08.090087  965801 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:26:08.090236  965801 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:26:08.168614  965801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:26:08.187696  965801 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:26:08.187729  965801 api_server.go:166] Checking apiserver status ...
	I0314 18:26:08.187773  965801 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:26:08.206543  965801 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup
	W0314 18:26:08.218833  965801 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1475/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:26:08.218892  965801 ssh_runner.go:195] Run: ls
	I0314 18:26:08.224231  965801 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:26:08.233487  965801 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:26:08.233510  965801 status.go:422] ha-105786-m03 apiserver status = Running (err=<nil>)
	I0314 18:26:08.233519  965801 status.go:257] ha-105786-m03 status: &{Name:ha-105786-m03 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:26:08.233536  965801 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:26:08.233870  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.233912  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.251255  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44895
	I0314 18:26:08.251831  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.252337  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.252370  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.252701  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.252955  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:26:08.254780  965801 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:26:08.254798  965801 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:26:08.255107  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.255149  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.269897  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35707
	I0314 18:26:08.270265  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.270724  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.270750  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.271118  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.271303  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:26:08.274074  965801 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:08.274522  965801 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:26:08.274551  965801 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:08.274654  965801 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:26:08.275053  965801 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:08.275095  965801 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:08.289614  965801 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37917
	I0314 18:26:08.290069  965801 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:08.290549  965801 main.go:141] libmachine: Using API Version  1
	I0314 18:26:08.290570  965801 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:08.290852  965801 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:08.291003  965801 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:26:08.291154  965801 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:26:08.291172  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:26:08.293761  965801 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:08.294200  965801 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:26:08.294231  965801 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:08.294337  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:26:08.294495  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:26:08.294646  965801 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:26:08.294782  965801 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:26:08.380464  965801 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:26:08.399969  965801 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:432: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr" : exit status 7
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105786 -n ha-105786
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 logs -n 25: (1.571812928s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m03_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m04 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp testdata/cp-test.txt                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m03 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105786 node stop m02 -v=7                                                     | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105786 node start m02 -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:18:02
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:18:02.895267  960722 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:18:02.895394  960722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:18:02.895404  960722 out.go:304] Setting ErrFile to fd 2...
	I0314 18:18:02.895408  960722 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:18:02.895618  960722 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:18:02.896256  960722 out.go:298] Setting JSON to false
	I0314 18:18:02.897280  960722 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":93635,"bootTime":1710346648,"procs":211,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:18:02.897349  960722 start.go:139] virtualization: kvm guest
	I0314 18:18:02.899596  960722 out.go:177] * [ha-105786] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:18:02.900900  960722 notify.go:220] Checking for updates...
	I0314 18:18:02.902213  960722 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:18:02.903507  960722 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:18:02.904780  960722 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:18:02.905989  960722 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:02.907362  960722 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:18:02.908640  960722 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:18:02.909996  960722 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:18:02.946213  960722 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 18:18:02.947582  960722 start.go:297] selected driver: kvm2
	I0314 18:18:02.947601  960722 start.go:901] validating driver "kvm2" against <nil>
	I0314 18:18:02.947612  960722 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:18:02.948348  960722 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:18:02.948426  960722 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:18:02.962979  960722 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:18:02.963024  960722 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:18:02.963232  960722 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:18:02.963264  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:02.963271  960722 cni.go:136] multinode detected (0 nodes found), recommending kindnet
	I0314 18:18:02.963282  960722 start_flags.go:319] Found "CNI" CNI - setting NetworkPlugin=cni
	I0314 18:18:02.963335  960722 start.go:340] cluster config:
	{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:18:02.963430  960722 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:18:02.965090  960722 out.go:177] * Starting "ha-105786" primary control-plane node in "ha-105786" cluster
	I0314 18:18:02.966371  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:18:02.966398  960722 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:18:02.966405  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:18:02.966471  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:18:02.966481  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:18:02.966775  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:18:02.966800  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json: {Name:mk72f73c0aa560b79de9e232e75bc80724a95ce3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:02.966930  960722 start.go:360] acquireMachinesLock for ha-105786: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:18:02.966958  960722 start.go:364] duration metric: took 15.33µs to acquireMachinesLock for "ha-105786"
	I0314 18:18:02.966974  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:18:02.967026  960722 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 18:18:02.968645  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:18:02.968768  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:18:02.968806  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:18:02.982864  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43579
	I0314 18:18:02.983324  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:18:02.983936  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:18:02.983955  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:18:02.984350  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:18:02.984629  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:02.984761  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:02.984910  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:18:02.984933  960722 client.go:168] LocalClient.Create starting
	I0314 18:18:02.984958  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:18:02.984991  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:18:02.985006  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:18:02.985059  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:18:02.985077  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:18:02.985087  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:18:02.985105  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:18:02.985114  960722 main.go:141] libmachine: (ha-105786) Calling .PreCreateCheck
	I0314 18:18:02.985483  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:02.985793  960722 main.go:141] libmachine: Creating machine...
	I0314 18:18:02.985807  960722 main.go:141] libmachine: (ha-105786) Calling .Create
	I0314 18:18:02.985947  960722 main.go:141] libmachine: (ha-105786) Creating KVM machine...
	I0314 18:18:02.987182  960722 main.go:141] libmachine: (ha-105786) DBG | found existing default KVM network
	I0314 18:18:02.987946  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:02.987802  960745 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000015320}
	I0314 18:18:02.987990  960722 main.go:141] libmachine: (ha-105786) DBG | created network xml: 
	I0314 18:18:02.988005  960722 main.go:141] libmachine: (ha-105786) DBG | <network>
	I0314 18:18:02.988018  960722 main.go:141] libmachine: (ha-105786) DBG |   <name>mk-ha-105786</name>
	I0314 18:18:02.988026  960722 main.go:141] libmachine: (ha-105786) DBG |   <dns enable='no'/>
	I0314 18:18:02.988035  960722 main.go:141] libmachine: (ha-105786) DBG |   
	I0314 18:18:02.988043  960722 main.go:141] libmachine: (ha-105786) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0314 18:18:02.988055  960722 main.go:141] libmachine: (ha-105786) DBG |     <dhcp>
	I0314 18:18:02.988062  960722 main.go:141] libmachine: (ha-105786) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0314 18:18:02.988067  960722 main.go:141] libmachine: (ha-105786) DBG |     </dhcp>
	I0314 18:18:02.988075  960722 main.go:141] libmachine: (ha-105786) DBG |   </ip>
	I0314 18:18:02.988079  960722 main.go:141] libmachine: (ha-105786) DBG |   
	I0314 18:18:02.988083  960722 main.go:141] libmachine: (ha-105786) DBG | </network>
	I0314 18:18:02.988091  960722 main.go:141] libmachine: (ha-105786) DBG | 
	I0314 18:18:02.993007  960722 main.go:141] libmachine: (ha-105786) DBG | trying to create private KVM network mk-ha-105786 192.168.39.0/24...
	I0314 18:18:03.062029  960722 main.go:141] libmachine: (ha-105786) DBG | private KVM network mk-ha-105786 192.168.39.0/24 created
	I0314 18:18:03.062064  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.062015  960745 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:03.062078  960722 main.go:141] libmachine: (ha-105786) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 ...
	I0314 18:18:03.062114  960722 main.go:141] libmachine: (ha-105786) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:18:03.062221  960722 main.go:141] libmachine: (ha-105786) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:18:03.320056  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.319919  960745 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa...
	I0314 18:18:03.443945  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.443772  960745 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/ha-105786.rawdisk...
	I0314 18:18:03.443980  960722 main.go:141] libmachine: (ha-105786) DBG | Writing magic tar header
	I0314 18:18:03.443990  960722 main.go:141] libmachine: (ha-105786) DBG | Writing SSH key tar header
	I0314 18:18:03.443998  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:03.443904  960745 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 ...
	I0314 18:18:03.444011  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786
	I0314 18:18:03.444072  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786 (perms=drwx------)
	I0314 18:18:03.444103  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:18:03.444115  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:18:03.444122  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:18:03.444138  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:18:03.444151  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:18:03.444166  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:18:03.444178  960722 main.go:141] libmachine: (ha-105786) DBG | Checking permissions on dir: /home
	I0314 18:18:03.444188  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:18:03.444194  960722 main.go:141] libmachine: (ha-105786) DBG | Skipping /home - not owner
	I0314 18:18:03.444203  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:18:03.444231  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:18:03.444247  960722 main.go:141] libmachine: (ha-105786) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:18:03.444261  960722 main.go:141] libmachine: (ha-105786) Creating domain...
	I0314 18:18:03.445522  960722 main.go:141] libmachine: (ha-105786) define libvirt domain using xml: 
	I0314 18:18:03.445544  960722 main.go:141] libmachine: (ha-105786) <domain type='kvm'>
	I0314 18:18:03.445550  960722 main.go:141] libmachine: (ha-105786)   <name>ha-105786</name>
	I0314 18:18:03.445555  960722 main.go:141] libmachine: (ha-105786)   <memory unit='MiB'>2200</memory>
	I0314 18:18:03.445563  960722 main.go:141] libmachine: (ha-105786)   <vcpu>2</vcpu>
	I0314 18:18:03.445569  960722 main.go:141] libmachine: (ha-105786)   <features>
	I0314 18:18:03.445580  960722 main.go:141] libmachine: (ha-105786)     <acpi/>
	I0314 18:18:03.445591  960722 main.go:141] libmachine: (ha-105786)     <apic/>
	I0314 18:18:03.445618  960722 main.go:141] libmachine: (ha-105786)     <pae/>
	I0314 18:18:03.445666  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.445680  960722 main.go:141] libmachine: (ha-105786)   </features>
	I0314 18:18:03.445689  960722 main.go:141] libmachine: (ha-105786)   <cpu mode='host-passthrough'>
	I0314 18:18:03.445701  960722 main.go:141] libmachine: (ha-105786)   
	I0314 18:18:03.445718  960722 main.go:141] libmachine: (ha-105786)   </cpu>
	I0314 18:18:03.445746  960722 main.go:141] libmachine: (ha-105786)   <os>
	I0314 18:18:03.445770  960722 main.go:141] libmachine: (ha-105786)     <type>hvm</type>
	I0314 18:18:03.445785  960722 main.go:141] libmachine: (ha-105786)     <boot dev='cdrom'/>
	I0314 18:18:03.445793  960722 main.go:141] libmachine: (ha-105786)     <boot dev='hd'/>
	I0314 18:18:03.445807  960722 main.go:141] libmachine: (ha-105786)     <bootmenu enable='no'/>
	I0314 18:18:03.445818  960722 main.go:141] libmachine: (ha-105786)   </os>
	I0314 18:18:03.445830  960722 main.go:141] libmachine: (ha-105786)   <devices>
	I0314 18:18:03.445841  960722 main.go:141] libmachine: (ha-105786)     <disk type='file' device='cdrom'>
	I0314 18:18:03.445858  960722 main.go:141] libmachine: (ha-105786)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/boot2docker.iso'/>
	I0314 18:18:03.445877  960722 main.go:141] libmachine: (ha-105786)       <target dev='hdc' bus='scsi'/>
	I0314 18:18:03.445906  960722 main.go:141] libmachine: (ha-105786)       <readonly/>
	I0314 18:18:03.445923  960722 main.go:141] libmachine: (ha-105786)     </disk>
	I0314 18:18:03.445937  960722 main.go:141] libmachine: (ha-105786)     <disk type='file' device='disk'>
	I0314 18:18:03.445959  960722 main.go:141] libmachine: (ha-105786)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:18:03.445972  960722 main.go:141] libmachine: (ha-105786)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/ha-105786.rawdisk'/>
	I0314 18:18:03.445980  960722 main.go:141] libmachine: (ha-105786)       <target dev='hda' bus='virtio'/>
	I0314 18:18:03.445986  960722 main.go:141] libmachine: (ha-105786)     </disk>
	I0314 18:18:03.445992  960722 main.go:141] libmachine: (ha-105786)     <interface type='network'>
	I0314 18:18:03.445998  960722 main.go:141] libmachine: (ha-105786)       <source network='mk-ha-105786'/>
	I0314 18:18:03.446005  960722 main.go:141] libmachine: (ha-105786)       <model type='virtio'/>
	I0314 18:18:03.446010  960722 main.go:141] libmachine: (ha-105786)     </interface>
	I0314 18:18:03.446019  960722 main.go:141] libmachine: (ha-105786)     <interface type='network'>
	I0314 18:18:03.446027  960722 main.go:141] libmachine: (ha-105786)       <source network='default'/>
	I0314 18:18:03.446032  960722 main.go:141] libmachine: (ha-105786)       <model type='virtio'/>
	I0314 18:18:03.446040  960722 main.go:141] libmachine: (ha-105786)     </interface>
	I0314 18:18:03.446047  960722 main.go:141] libmachine: (ha-105786)     <serial type='pty'>
	I0314 18:18:03.446052  960722 main.go:141] libmachine: (ha-105786)       <target port='0'/>
	I0314 18:18:03.446059  960722 main.go:141] libmachine: (ha-105786)     </serial>
	I0314 18:18:03.446064  960722 main.go:141] libmachine: (ha-105786)     <console type='pty'>
	I0314 18:18:03.446072  960722 main.go:141] libmachine: (ha-105786)       <target type='serial' port='0'/>
	I0314 18:18:03.446080  960722 main.go:141] libmachine: (ha-105786)     </console>
	I0314 18:18:03.446096  960722 main.go:141] libmachine: (ha-105786)     <rng model='virtio'>
	I0314 18:18:03.446105  960722 main.go:141] libmachine: (ha-105786)       <backend model='random'>/dev/random</backend>
	I0314 18:18:03.446112  960722 main.go:141] libmachine: (ha-105786)     </rng>
	I0314 18:18:03.446117  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.446123  960722 main.go:141] libmachine: (ha-105786)     
	I0314 18:18:03.446128  960722 main.go:141] libmachine: (ha-105786)   </devices>
	I0314 18:18:03.446134  960722 main.go:141] libmachine: (ha-105786) </domain>
	I0314 18:18:03.446149  960722 main.go:141] libmachine: (ha-105786) 
	I0314 18:18:03.450443  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:71:4d:d4 in network default
	I0314 18:18:03.451073  960722 main.go:141] libmachine: (ha-105786) Ensuring networks are active...
	I0314 18:18:03.451111  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:03.451694  960722 main.go:141] libmachine: (ha-105786) Ensuring network default is active
	I0314 18:18:03.452099  960722 main.go:141] libmachine: (ha-105786) Ensuring network mk-ha-105786 is active
	I0314 18:18:03.452690  960722 main.go:141] libmachine: (ha-105786) Getting domain xml...
	I0314 18:18:03.453518  960722 main.go:141] libmachine: (ha-105786) Creating domain...
	I0314 18:18:04.631641  960722 main.go:141] libmachine: (ha-105786) Waiting to get IP...
	I0314 18:18:04.632510  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:04.632971  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:04.633014  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:04.632960  960745 retry.go:31] will retry after 277.79723ms: waiting for machine to come up
	I0314 18:18:04.912574  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:04.913043  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:04.913072  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:04.912993  960745 retry.go:31] will retry after 255.423441ms: waiting for machine to come up
	I0314 18:18:05.170565  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:05.171048  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:05.171074  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:05.171001  960745 retry.go:31] will retry after 442.032708ms: waiting for machine to come up
	I0314 18:18:05.614258  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:05.614756  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:05.614790  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:05.614644  960745 retry.go:31] will retry after 569.414403ms: waiting for machine to come up
	I0314 18:18:06.185359  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:06.185838  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:06.185871  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:06.185829  960745 retry.go:31] will retry after 718.712718ms: waiting for machine to come up
	I0314 18:18:06.906730  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:06.907546  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:06.907569  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:06.907509  960745 retry.go:31] will retry after 573.35881ms: waiting for machine to come up
	I0314 18:18:07.481989  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:07.482332  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:07.482649  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:07.482303  960745 retry.go:31] will retry after 978.743717ms: waiting for machine to come up
	I0314 18:18:08.462336  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:08.462863  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:08.462896  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:08.462796  960745 retry.go:31] will retry after 1.071065961s: waiting for machine to come up
	I0314 18:18:09.535145  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:09.535547  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:09.535575  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:09.535484  960745 retry.go:31] will retry after 1.510895728s: waiting for machine to come up
	I0314 18:18:11.048495  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:11.048970  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:11.049003  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:11.048904  960745 retry.go:31] will retry after 1.947807983s: waiting for machine to come up
	I0314 18:18:12.998012  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:12.998404  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:12.998435  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:12.998372  960745 retry.go:31] will retry after 2.168107958s: waiting for machine to come up
	I0314 18:18:15.169746  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:15.170086  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:15.170118  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:15.170046  960745 retry.go:31] will retry after 2.38476079s: waiting for machine to come up
	I0314 18:18:17.557544  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:17.557911  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:17.557936  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:17.557857  960745 retry.go:31] will retry after 3.672710927s: waiting for machine to come up
	I0314 18:18:21.234171  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:21.234560  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find current IP address of domain ha-105786 in network mk-ha-105786
	I0314 18:18:21.234588  960722 main.go:141] libmachine: (ha-105786) DBG | I0314 18:18:21.234513  960745 retry.go:31] will retry after 4.998566272s: waiting for machine to come up
	I0314 18:18:26.237299  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.237759  960722 main.go:141] libmachine: (ha-105786) Found IP for machine: 192.168.39.170
	I0314 18:18:26.237781  960722 main.go:141] libmachine: (ha-105786) Reserving static IP address...
	I0314 18:18:26.237803  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has current primary IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.238321  960722 main.go:141] libmachine: (ha-105786) DBG | unable to find host DHCP lease matching {name: "ha-105786", mac: "52:54:00:87:0a:bd", ip: "192.168.39.170"} in network mk-ha-105786
	I0314 18:18:26.314639  960722 main.go:141] libmachine: (ha-105786) DBG | Getting to WaitForSSH function...
	I0314 18:18:26.314675  960722 main.go:141] libmachine: (ha-105786) Reserved static IP address: 192.168.39.170
	I0314 18:18:26.314688  960722 main.go:141] libmachine: (ha-105786) Waiting for SSH to be available...
	I0314 18:18:26.317402  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.317776  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:minikube Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.317800  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.317964  960722 main.go:141] libmachine: (ha-105786) DBG | Using SSH client type: external
	I0314 18:18:26.317986  960722 main.go:141] libmachine: (ha-105786) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa (-rw-------)
	I0314 18:18:26.318017  960722 main.go:141] libmachine: (ha-105786) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.170 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:18:26.318032  960722 main.go:141] libmachine: (ha-105786) DBG | About to run SSH command:
	I0314 18:18:26.318045  960722 main.go:141] libmachine: (ha-105786) DBG | exit 0
	I0314 18:18:26.444372  960722 main.go:141] libmachine: (ha-105786) DBG | SSH cmd err, output: <nil>: 
	I0314 18:18:26.444671  960722 main.go:141] libmachine: (ha-105786) KVM machine creation complete!
	I0314 18:18:26.445014  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:26.445659  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:26.445873  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:26.446084  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:18:26.446107  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:18:26.447559  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:18:26.447574  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:18:26.447581  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:18:26.447590  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.450086  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.450653  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.450681  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.450848  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.451046  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.451218  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.451347  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.451543  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.451787  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.451800  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:18:26.551832  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:18:26.551858  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:18:26.551867  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.554716  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.555203  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.555235  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.555328  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.555544  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.555754  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.555907  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.556127  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.556343  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.556356  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:18:26.657296  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:18:26.657397  960722 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:18:26.657408  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:18:26.657415  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.657659  960722 buildroot.go:166] provisioning hostname "ha-105786"
	I0314 18:18:26.657682  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.657839  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.660413  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.660745  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.660769  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.660865  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.661051  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.661216  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.661422  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.661574  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.661786  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.661802  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname
	I0314 18:18:26.776346  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:18:26.776384  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.778907  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.779350  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.779381  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.779528  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:26.779722  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.779925  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:26.780055  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:26.780228  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:26.780407  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:26.780423  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:18:26.889916  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:18:26.889950  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:18:26.889975  960722 buildroot.go:174] setting up certificates
	I0314 18:18:26.889992  960722 provision.go:84] configureAuth start
	I0314 18:18:26.890012  960722 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:18:26.890358  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:26.893298  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.893677  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.893707  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.893884  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:26.896046  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.896322  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:26.896357  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:26.896481  960722 provision.go:143] copyHostCerts
	I0314 18:18:26.896522  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:18:26.896569  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:18:26.896579  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:18:26.896652  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:18:26.896739  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:18:26.896759  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:18:26.896767  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:18:26.896795  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:18:26.896844  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:18:26.896862  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:18:26.896870  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:18:26.896893  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:18:26.896988  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786 san=[127.0.0.1 192.168.39.170 ha-105786 localhost minikube]
	I0314 18:18:27.042804  960722 provision.go:177] copyRemoteCerts
	I0314 18:18:27.042869  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:18:27.042897  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.045808  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.046142  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.046181  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.046360  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.046567  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.046761  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.046936  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.131474  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:18:27.131542  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:18:27.162593  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:18:27.162658  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:18:27.195241  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:18:27.195311  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0314 18:18:27.225509  960722 provision.go:87] duration metric: took 335.494563ms to configureAuth
	I0314 18:18:27.225538  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:18:27.225717  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:18:27.225826  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.228516  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.228951  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.228975  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.229219  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.229452  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.229613  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.229733  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.229889  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:27.230076  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:27.230097  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:18:27.503688  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:18:27.503721  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:18:27.503740  960722 main.go:141] libmachine: (ha-105786) Calling .GetURL
	I0314 18:18:27.505347  960722 main.go:141] libmachine: (ha-105786) DBG | Using libvirt version 6000000
	I0314 18:18:27.507719  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.508007  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.508030  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.508259  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:18:27.508276  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:18:27.508285  960722 client.go:171] duration metric: took 24.523342142s to LocalClient.Create
	I0314 18:18:27.508310  960722 start.go:167] duration metric: took 24.523398961s to libmachine.API.Create "ha-105786"
	I0314 18:18:27.508324  960722 start.go:293] postStartSetup for "ha-105786" (driver="kvm2")
	I0314 18:18:27.508340  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:18:27.508363  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.508606  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:18:27.508624  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.510740  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.511066  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.511096  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.511296  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.511490  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.511689  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.511843  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.599488  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:18:27.604015  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:18:27.604038  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:18:27.604098  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:18:27.604193  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:18:27.604207  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:18:27.604347  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:18:27.615221  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:18:27.640154  960722 start.go:296] duration metric: took 131.816012ms for postStartSetup
	I0314 18:18:27.640200  960722 main.go:141] libmachine: (ha-105786) Calling .GetConfigRaw
	I0314 18:18:27.640800  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:27.643784  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.644155  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.644177  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.644466  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:18:27.644631  960722 start.go:128] duration metric: took 24.677595105s to createHost
	I0314 18:18:27.644656  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.646948  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.647438  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.647466  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.647632  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.647802  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.647978  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.648129  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.648350  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:18:27.648516  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:18:27.648534  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:18:27.749277  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440307.721502673
	
	I0314 18:18:27.749306  960722 fix.go:216] guest clock: 1710440307.721502673
	I0314 18:18:27.749317  960722 fix.go:229] Guest: 2024-03-14 18:18:27.721502673 +0000 UTC Remote: 2024-03-14 18:18:27.644643708 +0000 UTC m=+24.798488720 (delta=76.858965ms)
	I0314 18:18:27.749337  960722 fix.go:200] guest clock delta is within tolerance: 76.858965ms
	I0314 18:18:27.749343  960722 start.go:83] releasing machines lock for "ha-105786", held for 24.78237756s
	I0314 18:18:27.749363  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.749665  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:27.752365  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.752715  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.752752  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.752902  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753381  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753570  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:18:27.753681  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:18:27.753727  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.753861  960722 ssh_runner.go:195] Run: cat /version.json
	I0314 18:18:27.753888  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:18:27.756457  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756748  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756783  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.756802  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.756899  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.757070  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.757179  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:27.757198  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:27.757223  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.757418  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.757432  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:18:27.757616  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:18:27.757775  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:18:27.757918  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:18:27.857480  960722 ssh_runner.go:195] Run: systemctl --version
	I0314 18:18:27.863499  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:18:28.026089  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:18:28.032954  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:18:28.033024  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:18:28.051341  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:18:28.051369  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:18:28.051449  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:18:28.068602  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:18:28.083502  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:18:28.083557  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:18:28.097189  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:18:28.110819  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:18:28.223881  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:18:28.371695  960722 docker.go:233] disabling docker service ...
	I0314 18:18:28.371781  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:18:28.386496  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:18:28.399599  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:18:28.528120  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:18:28.664621  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:18:28.678995  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:18:28.698960  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:18:28.699033  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.710540  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:18:28.710614  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.721780  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.732859  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:18:28.743777  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:18:28.755894  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:18:28.765722  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:18:28.765767  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:18:28.780136  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:18:28.789860  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:28.928565  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:18:29.066170  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:18:29.066257  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:18:29.071920  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:18:29.071968  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:18:29.076359  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:18:29.122746  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:18:29.122830  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:18:29.154125  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:18:29.186433  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:18:29.187711  960722 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:18:29.190440  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:29.190762  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:18:29.190798  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:18:29.190991  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:18:29.195470  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:18:29.209255  960722 kubeadm.go:877] updating cluster {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:18:29.209404  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:18:29.209461  960722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:18:29.244992  960722 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 18:18:29.245061  960722 ssh_runner.go:195] Run: which lz4
	I0314 18:18:29.249342  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 -> /preloaded.tar.lz4
	I0314 18:18:29.249447  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 18:18:29.254169  960722 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 18:18:29.254197  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 18:18:31.084689  960722 crio.go:444] duration metric: took 1.835272399s to copy over tarball
	I0314 18:18:31.084777  960722 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 18:18:33.846290  960722 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.761478036s)
	I0314 18:18:33.846321  960722 crio.go:451] duration metric: took 2.761602368s to extract the tarball
	I0314 18:18:33.846328  960722 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 18:18:33.888938  960722 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:18:33.938435  960722 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:18:33.938464  960722 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:18:33.938474  960722 kubeadm.go:928] updating node { 192.168.39.170 8443 v1.28.4 crio true true} ...
	I0314 18:18:33.938623  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:18:33.938711  960722 ssh_runner.go:195] Run: crio config
	I0314 18:18:34.006442  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:34.006465  960722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:18:34.006477  960722 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:18:34.006504  960722 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105786 NodeName:ha-105786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:18:34.006632  960722 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105786"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:18:34.006656  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:18:34.006714  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0314 18:18:34.006760  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:18:34.020479  960722 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:18:34.020550  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:18:34.035690  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0314 18:18:34.058819  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:18:34.081190  960722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0314 18:18:34.104015  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:18:34.122637  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:18:34.127020  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:18:34.140470  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:18:34.273560  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:18:34.293533  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.170
	I0314 18:18:34.293561  960722 certs.go:194] generating shared ca certs ...
	I0314 18:18:34.293579  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.293771  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:18:34.293832  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:18:34.293846  960722 certs.go:256] generating profile certs ...
	I0314 18:18:34.293907  960722 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:18:34.293926  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt with IP's: []
	I0314 18:18:34.363624  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt ...
	I0314 18:18:34.363656  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt: {Name:mk521f0de305a43ea283b038c3d788bb59bfde56 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.363858  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key ...
	I0314 18:18:34.363873  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key: {Name:mk1a0bd182fa9492a498d0d5b485dad4277d90a6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.363978  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c
	I0314 18:18:34.364000  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.254]
	I0314 18:18:34.461729  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c ...
	I0314 18:18:34.461761  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c: {Name:mk50c098325f62ce81d89b9e8c1f3ec90e4bf90a Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.461954  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c ...
	I0314 18:18:34.461977  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c: {Name:mk8aac17b0cfef91c6789f6d8dae3cb7806fcdd8 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.462078  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5a8f449c -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:18:34.462205  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5a8f449c -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:18:34.462288  960722 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:18:34.462315  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt with IP's: []
	I0314 18:18:34.610899  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt ...
	I0314 18:18:34.610932  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt: {Name:mk94ac1976010d6f666bb6ae031e119703a2dfaf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.611102  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key ...
	I0314 18:18:34.611114  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key: {Name:mkd90e890f5b0c8090e1d58015e8b16a4114332c Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:18:34.611187  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:18:34.611219  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:18:34.611239  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:18:34.611252  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:18:34.611265  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:18:34.611278  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:18:34.611290  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:18:34.611302  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:18:34.611347  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:18:34.611389  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:18:34.611400  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:18:34.611421  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:18:34.611451  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:18:34.611479  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:18:34.611514  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:18:34.611548  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.611567  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:34.611579  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:18:34.612246  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:18:34.641651  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:18:34.669535  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:18:34.697069  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:18:34.724855  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:18:34.752813  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:18:34.780891  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:18:34.807976  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:18:34.835847  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:18:34.869677  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:18:34.896834  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:18:34.924111  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:18:34.942480  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:18:34.948686  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:18:34.960274  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.965262  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.965314  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:18:34.971625  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:18:34.983752  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:18:34.995538  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.000530  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.000586  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:18:35.007110  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:18:35.018842  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:18:35.030720  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.035627  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.035682  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:18:35.041936  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:18:35.053705  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:18:35.058345  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:18:35.058405  960722 kubeadm.go:391] StartCluster: {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:18:35.058506  960722 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:18:35.058576  960722 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:18:35.104718  960722 cri.go:89] found id: ""
	I0314 18:18:35.104799  960722 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 18:18:35.123713  960722 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 18:18:35.145008  960722 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 18:18:35.169536  960722 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 18:18:35.169584  960722 kubeadm.go:156] found existing configuration files:
	
	I0314 18:18:35.169661  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 18:18:35.187064  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 18:18:35.187175  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 18:18:35.209160  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 18:18:35.219891  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 18:18:35.219949  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 18:18:35.230159  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 18:18:35.240035  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 18:18:35.240101  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 18:18:35.250402  960722 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 18:18:35.260041  960722 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 18:18:35.260099  960722 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
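
The block above is the stale-config check: before running kubeadm init, minikube greps each of the four kubeconfig files for the expected control-plane endpoint and deletes any file that does not reference it (here none of the files exist yet, so all four are removed). A minimal sketch of that logic, assuming commands reach the guest over SSH; the runOnNode helper and the target address are illustrative, not minikube's real API:

package main

import (
	"fmt"
	"os/exec"
)

// runOnNode is an illustrative stand-in for minikube's ssh_runner: run a
// command on the guest VM over SSH and report whether it exited successfully.
func runOnNode(target, cmd string) error {
	return exec.Command("ssh", target, cmd).Run()
}

func cleanStaleKubeconfigs(target string) {
	endpoint := "https://control-plane.minikube.internal:8443"
	for _, f := range []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	} {
		// If the file is missing or does not mention the expected endpoint,
		// remove it so the upcoming "kubeadm init" can write a fresh copy.
		if runOnNode(target, fmt.Sprintf("sudo grep %s %s", endpoint, f)) != nil {
			_ = runOnNode(target, "sudo rm -f "+f)
		}
	}
}

func main() {
	cleanStaleKubeconfigs("docker@192.168.39.170")
}
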
	I0314 18:18:35.270606  960722 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 18:18:35.527249  960722 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 18:18:50.279972  960722 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 18:18:50.280080  960722 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 18:18:50.280159  960722 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 18:18:50.280308  960722 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 18:18:50.280433  960722 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 18:18:50.280524  960722 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 18:18:50.282142  960722 out.go:204]   - Generating certificates and keys ...
	I0314 18:18:50.282262  960722 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 18:18:50.282340  960722 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 18:18:50.282461  960722 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 18:18:50.282556  960722 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 18:18:50.282659  960722 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 18:18:50.282711  960722 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 18:18:50.282771  960722 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 18:18:50.282909  960722 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [ha-105786 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0314 18:18:50.282985  960722 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 18:18:50.283149  960722 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [ha-105786 localhost] and IPs [192.168.39.170 127.0.0.1 ::1]
	I0314 18:18:50.283238  960722 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 18:18:50.283333  960722 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 18:18:50.283399  960722 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 18:18:50.283483  960722 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 18:18:50.283563  960722 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 18:18:50.283621  960722 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 18:18:50.283696  960722 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 18:18:50.283777  960722 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 18:18:50.283917  960722 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 18:18:50.284008  960722 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 18:18:50.285632  960722 out.go:204]   - Booting up control plane ...
	I0314 18:18:50.285748  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 18:18:50.285838  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 18:18:50.285942  960722 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 18:18:50.286072  960722 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 18:18:50.286210  960722 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 18:18:50.286282  960722 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 18:18:50.286480  960722 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 18:18:50.286585  960722 kubeadm.go:309] [apiclient] All control plane components are healthy after 9.627994 seconds
	I0314 18:18:50.286738  960722 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 18:18:50.286933  960722 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 18:18:50.287020  960722 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 18:18:50.287251  960722 kubeadm.go:309] [mark-control-plane] Marking the node ha-105786 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 18:18:50.287349  960722 kubeadm.go:309] [bootstrap-token] Using token: 7tm0k5.0klpcf5r6yb9tlsb
	I0314 18:18:50.288682  960722 out.go:204]   - Configuring RBAC rules ...
	I0314 18:18:50.288797  960722 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 18:18:50.288875  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 18:18:50.289023  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 18:18:50.289205  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 18:18:50.289385  960722 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 18:18:50.289496  960722 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 18:18:50.289624  960722 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 18:18:50.289682  960722 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 18:18:50.289750  960722 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 18:18:50.289763  960722 kubeadm.go:309] 
	I0314 18:18:50.289846  960722 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 18:18:50.289855  960722 kubeadm.go:309] 
	I0314 18:18:50.289933  960722 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 18:18:50.289943  960722 kubeadm.go:309] 
	I0314 18:18:50.289976  960722 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 18:18:50.290054  960722 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 18:18:50.290130  960722 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 18:18:50.290139  960722 kubeadm.go:309] 
	I0314 18:18:50.290190  960722 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 18:18:50.290199  960722 kubeadm.go:309] 
	I0314 18:18:50.290295  960722 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 18:18:50.290313  960722 kubeadm.go:309] 
	I0314 18:18:50.290372  960722 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 18:18:50.290437  960722 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 18:18:50.290501  960722 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 18:18:50.290510  960722 kubeadm.go:309] 
	I0314 18:18:50.290594  960722 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 18:18:50.290660  960722 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 18:18:50.290666  960722 kubeadm.go:309] 
	I0314 18:18:50.290733  960722 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token 7tm0k5.0klpcf5r6yb9tlsb \
	I0314 18:18:50.290855  960722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 18:18:50.290901  960722 kubeadm.go:309] 	--control-plane 
	I0314 18:18:50.290911  960722 kubeadm.go:309] 
	I0314 18:18:50.291003  960722 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 18:18:50.291017  960722 kubeadm.go:309] 
	I0314 18:18:50.291121  960722 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token 7tm0k5.0klpcf5r6yb9tlsb \
	I0314 18:18:50.291245  960722 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
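
The init output ends with the exact join command that later control-plane and worker nodes will use. Purely as an illustration of how that command could be driven from the host over SSH, in the same style as the ssh_runner calls throughout this log (the sshRun helper and the target address are made up for the sketch; the token and CA hash are the ones printed above):

package main

import "os/exec"

// sshRun is an illustrative helper, not minikube's API: run a command on the
// target VM over SSH and return its combined output.
func sshRun(target, cmd string) ([]byte, error) {
	return exec.Command("ssh", target, cmd).CombinedOutput()
}

func main() {
	join := "sudo kubeadm join control-plane.minikube.internal:8443" +
		" --token 7tm0k5.0klpcf5r6yb9tlsb" +
		" --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d" +
		" --control-plane"
	if out, err := sshRun("docker@192.168.39.245", join); err != nil {
		panic(string(out))
	}
}
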
	I0314 18:18:50.291281  960722 cni.go:84] Creating CNI manager for ""
	I0314 18:18:50.291294  960722 cni.go:136] multinode detected (1 nodes found), recommending kindnet
	I0314 18:18:50.292938  960722 out.go:177] * Configuring CNI (Container Networking Interface) ...
	I0314 18:18:50.294349  960722 ssh_runner.go:195] Run: stat /opt/cni/bin/portmap
	I0314 18:18:50.315278  960722 cni.go:182] applying CNI manifest using /var/lib/minikube/binaries/v1.28.4/kubectl ...
	I0314 18:18:50.315302  960722 ssh_runner.go:362] scp memory --> /var/tmp/minikube/cni.yaml (2438 bytes)
	I0314 18:18:50.353422  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml
	I0314 18:18:51.467661  960722 ssh_runner.go:235] Completed: sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply --kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml: (1.114181908s)
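
Because more than one node was requested, minikube picked kindnet, verified the portmap CNI plugin exists on the guest, copied the generated manifest to /var/tmp/minikube/cni.yaml, and applied it with the kubectl binary it manages under /var/lib/minikube/binaries. A condensed sketch of those steps; sshRun and scpBytes are illustrative stand-ins for ssh_runner, and kindnet.yaml is a placeholder for the generated manifest:

package main

import (
	"bytes"
	"os"
	"os/exec"
)

// sshRun runs a command on the guest over SSH (illustrative helper).
func sshRun(host, cmd string) error {
	return exec.Command("ssh", host, cmd).Run()
}

// scpBytes streams a byte slice into a file on the guest (illustrative helper).
func scpBytes(host string, data []byte, dst string) error {
	c := exec.Command("ssh", host, "sudo tee "+dst)
	c.Stdin = bytes.NewReader(data)
	return c.Run()
}

func main() {
	host := "docker@192.168.39.170"
	manifest, _ := os.ReadFile("kindnet.yaml") // placeholder for the generated CNI manifest

	// 1. make sure the portmap CNI plugin binary is present on the guest
	if err := sshRun(host, "stat /opt/cni/bin/portmap"); err != nil {
		panic(err)
	}
	// 2. copy the manifest over, then 3. apply it with the bundled kubectl
	if err := scpBytes(host, manifest, "/var/tmp/minikube/cni.yaml"); err != nil {
		panic(err)
	}
	if err := sshRun(host, "sudo /var/lib/minikube/binaries/v1.28.4/kubectl apply "+
		"--kubeconfig=/var/lib/minikube/kubeconfig -f /var/tmp/minikube/cni.yaml"); err != nil {
		panic(err)
	}
}
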
	I0314 18:18:51.467713  960722 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 18:18:51.467838  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:51.467871  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786 minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=true
	I0314 18:18:51.514442  960722 ops.go:34] apiserver oom_adj: -16
	I0314 18:18:51.681859  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:52.182201  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:52.682373  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:53.182046  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:53.682159  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:54.181904  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:54.682343  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:55.182245  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:55.681992  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:56.182155  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:56.682005  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:57.182337  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:57.682594  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:58.182384  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:58.682911  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:59.182017  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:18:59.682641  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.182000  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.682494  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 18:19:00.831620  960722 kubeadm.go:1106] duration metric: took 9.363874212s to wait for elevateKubeSystemPrivileges
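
The run of identical "kubectl get sa default" lines above is minikube polling roughly every half second until the default service account exists, which is what the 9.36s "elevateKubeSystemPrivileges" duration measures. A toy version of that wait loop; the paths and interval come from the log, but the helper itself is only a sketch:

package main

import (
	"os/exec"
	"time"
)

// waitForDefaultSA polls "kubectl get sa default" on the guest until the
// default service account exists or the deadline passes (illustrative sketch).
func waitForDefaultSA(host string, timeout time.Duration) bool {
	cmd := "sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default " +
		"--kubeconfig=/var/lib/minikube/kubeconfig"
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if exec.Command("ssh", host, cmd).Run() == nil {
			return true
		}
		time.Sleep(500 * time.Millisecond)
	}
	return false
}

func main() {
	_ = waitForDefaultSA("docker@192.168.39.170", 2*time.Minute)
}
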
	W0314 18:19:00.831664  960722 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 18:19:00.831672  960722 kubeadm.go:393] duration metric: took 25.773272774s to StartCluster
	I0314 18:19:00.831692  960722 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:00.831768  960722 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:19:00.832530  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:00.832732  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 18:19:00.832741  960722 start.go:232] HA (multi-control plane) cluster: will skip waiting for primary control-plane node &{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:00.832765  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:19:00.832776  960722 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 18:19:00.832908  960722 addons.go:69] Setting storage-provisioner=true in profile "ha-105786"
	I0314 18:19:00.832935  960722 addons.go:69] Setting default-storageclass=true in profile "ha-105786"
	I0314 18:19:00.832981  960722 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "ha-105786"
	I0314 18:19:00.833010  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:00.832943  960722 addons.go:234] Setting addon storage-provisioner=true in "ha-105786"
	I0314 18:19:00.833076  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:00.833451  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.833474  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.833486  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.833497  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.849486  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38617
	I0314 18:19:00.849523  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40707
	I0314 18:19:00.849987  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.850007  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.850537  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.850557  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.850577  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.850598  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.850941  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.850971  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.851155  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.851489  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.851519  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.853457  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:19:00.853818  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(n
il)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	I0314 18:19:00.854363  960722 cert_rotation.go:137] Starting client certificate rotation controller
	I0314 18:19:00.854579  960722 addons.go:234] Setting addon default-storageclass=true in "ha-105786"
	I0314 18:19:00.854626  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:00.854930  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.854953  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.866778  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36745
	I0314 18:19:00.867222  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.867721  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.867744  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.868150  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.868365  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.868987  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42499
	I0314 18:19:00.869450  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.869853  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.869875  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.870261  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.870431  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:00.872807  960722 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 18:19:00.870938  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:00.872845  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:00.874666  960722 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:00.874697  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 18:19:00.874717  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:00.877611  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.878117  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:00.878149  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.878392  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:00.878573  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:00.878724  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:00.878845  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:00.888259  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45349
	I0314 18:19:00.888656  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:00.889221  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:00.889247  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:00.889601  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:00.889812  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:00.891343  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:00.891635  960722 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:00.891656  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 18:19:00.891682  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:00.894534  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.894921  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:00.894950  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:00.895115  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:00.895296  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:00.895478  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:00.895632  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:00.939242  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 18:19:01.006796  960722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 18:19:01.049695  960722 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 18:19:01.373612  960722 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
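
The sed pipeline above edits the coredns ConfigMap in place: it splices a hosts block ahead of the forward directive (and a log directive ahead of errors), which is what "host record injected" refers to. After the replace, the Corefile carries the stanza the sed expression inserts:

        hosts {
           192.168.39.1 host.minikube.internal
           fallthrough
        }
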
	I0314 18:19:01.685782  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.685814  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.685792  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.685886  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686174  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686197  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686207  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.686214  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686220  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.686174  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686263  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686284  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.686295  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.686433  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686449  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686449  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.686530  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.686545  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.686594  960722 round_trippers.go:463] GET https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses
	I0314 18:19:01.686608  960722 round_trippers.go:469] Request Headers:
	I0314 18:19:01.686619  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:01.686626  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:19:01.697189  960722 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:19:01.697945  960722 round_trippers.go:463] PUT https://192.168.39.254:8443/apis/storage.k8s.io/v1/storageclasses/standard
	I0314 18:19:01.697962  960722 round_trippers.go:469] Request Headers:
	I0314 18:19:01.697970  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:19:01.697973  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:19:01.697977  960722 round_trippers.go:473]     Content-Type: application/json
	I0314 18:19:01.700816  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:19:01.701321  960722 main.go:141] libmachine: Making call to close driver server
	I0314 18:19:01.701335  960722 main.go:141] libmachine: (ha-105786) Calling .Close
	I0314 18:19:01.701571  960722 main.go:141] libmachine: Successfully made call to close driver server
	I0314 18:19:01.701592  960722 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 18:19:01.701613  960722 main.go:141] libmachine: (ha-105786) DBG | Closing plugin on server side
	I0314 18:19:01.703278  960722 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 18:19:01.704554  960722 addons.go:505] duration metric: took 871.776369ms for enable addons: enabled=[storage-provisioner default-storageclass]
	I0314 18:19:01.704593  960722 start.go:245] waiting for cluster config update ...
	I0314 18:19:01.704618  960722 start.go:254] writing updated cluster config ...
	I0314 18:19:01.706213  960722 out.go:177] 
	I0314 18:19:01.707745  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:01.707826  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:01.709513  960722 out.go:177] * Starting "ha-105786-m02" control-plane node in "ha-105786" cluster
	I0314 18:19:01.710629  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:19:01.710662  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:19:01.710741  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:19:01.710754  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:19:01.710830  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:01.710990  960722 start.go:360] acquireMachinesLock for ha-105786-m02: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:19:01.711040  960722 start.go:364] duration metric: took 28.593µs to acquireMachinesLock for "ha-105786-m02"
	I0314 18:19:01.711064  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 Cer
tExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m02 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:01.711163  960722 start.go:125] createHost starting for "m02" (driver="kvm2")
	I0314 18:19:01.712664  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:19:01.712752  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:01.712781  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:01.728804  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44999
	I0314 18:19:01.729285  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:01.729841  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:01.729864  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:01.730219  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:01.730479  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:01.730650  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:01.730801  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:19:01.730841  960722 client.go:168] LocalClient.Create starting
	I0314 18:19:01.730880  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:19:01.730922  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:01.730944  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:01.731013  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:19:01.731040  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:19:01.731056  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:19:01.731080  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:19:01.731092  960722 main.go:141] libmachine: (ha-105786-m02) Calling .PreCreateCheck
	I0314 18:19:01.731262  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:01.731696  960722 main.go:141] libmachine: Creating machine...
	I0314 18:19:01.731712  960722 main.go:141] libmachine: (ha-105786-m02) Calling .Create
	I0314 18:19:01.731862  960722 main.go:141] libmachine: (ha-105786-m02) Creating KVM machine...
	I0314 18:19:01.733198  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found existing default KVM network
	I0314 18:19:01.733304  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found existing private KVM network mk-ha-105786
	I0314 18:19:01.733431  960722 main.go:141] libmachine: (ha-105786-m02) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 ...
	I0314 18:19:01.733460  960722 main.go:141] libmachine: (ha-105786-m02) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:19:01.733506  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:01.733408  961068 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:19:01.733639  960722 main.go:141] libmachine: (ha-105786-m02) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:19:01.999912  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:01.999756  961068 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa...
	I0314 18:19:02.186279  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:02.186121  961068 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/ha-105786-m02.rawdisk...
	I0314 18:19:02.186316  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Writing magic tar header
	I0314 18:19:02.186329  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Writing SSH key tar header
	I0314 18:19:02.186341  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:02.186299  961068 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 ...
	I0314 18:19:02.186468  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02
	I0314 18:19:02.186489  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:19:02.186503  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02 (perms=drwx------)
	I0314 18:19:02.186512  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:19:02.186520  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:19:02.186529  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:19:02.186537  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:19:02.186551  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:19:02.186565  960722 main.go:141] libmachine: (ha-105786-m02) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:19:02.186579  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:19:02.186616  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:19:02.186644  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:19:02.186654  960722 main.go:141] libmachine: (ha-105786-m02) Creating domain...
	I0314 18:19:02.186673  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Checking permissions on dir: /home
	I0314 18:19:02.186685  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Skipping /home - not owner
	I0314 18:19:02.187570  960722 main.go:141] libmachine: (ha-105786-m02) define libvirt domain using xml: 
	I0314 18:19:02.187596  960722 main.go:141] libmachine: (ha-105786-m02) <domain type='kvm'>
	I0314 18:19:02.187606  960722 main.go:141] libmachine: (ha-105786-m02)   <name>ha-105786-m02</name>
	I0314 18:19:02.187619  960722 main.go:141] libmachine: (ha-105786-m02)   <memory unit='MiB'>2200</memory>
	I0314 18:19:02.187640  960722 main.go:141] libmachine: (ha-105786-m02)   <vcpu>2</vcpu>
	I0314 18:19:02.187648  960722 main.go:141] libmachine: (ha-105786-m02)   <features>
	I0314 18:19:02.187653  960722 main.go:141] libmachine: (ha-105786-m02)     <acpi/>
	I0314 18:19:02.187659  960722 main.go:141] libmachine: (ha-105786-m02)     <apic/>
	I0314 18:19:02.187665  960722 main.go:141] libmachine: (ha-105786-m02)     <pae/>
	I0314 18:19:02.187671  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.187676  960722 main.go:141] libmachine: (ha-105786-m02)   </features>
	I0314 18:19:02.187682  960722 main.go:141] libmachine: (ha-105786-m02)   <cpu mode='host-passthrough'>
	I0314 18:19:02.187690  960722 main.go:141] libmachine: (ha-105786-m02)   
	I0314 18:19:02.187697  960722 main.go:141] libmachine: (ha-105786-m02)   </cpu>
	I0314 18:19:02.187729  960722 main.go:141] libmachine: (ha-105786-m02)   <os>
	I0314 18:19:02.187753  960722 main.go:141] libmachine: (ha-105786-m02)     <type>hvm</type>
	I0314 18:19:02.187763  960722 main.go:141] libmachine: (ha-105786-m02)     <boot dev='cdrom'/>
	I0314 18:19:02.187773  960722 main.go:141] libmachine: (ha-105786-m02)     <boot dev='hd'/>
	I0314 18:19:02.187781  960722 main.go:141] libmachine: (ha-105786-m02)     <bootmenu enable='no'/>
	I0314 18:19:02.187791  960722 main.go:141] libmachine: (ha-105786-m02)   </os>
	I0314 18:19:02.187799  960722 main.go:141] libmachine: (ha-105786-m02)   <devices>
	I0314 18:19:02.187810  960722 main.go:141] libmachine: (ha-105786-m02)     <disk type='file' device='cdrom'>
	I0314 18:19:02.187833  960722 main.go:141] libmachine: (ha-105786-m02)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/boot2docker.iso'/>
	I0314 18:19:02.187855  960722 main.go:141] libmachine: (ha-105786-m02)       <target dev='hdc' bus='scsi'/>
	I0314 18:19:02.187869  960722 main.go:141] libmachine: (ha-105786-m02)       <readonly/>
	I0314 18:19:02.187876  960722 main.go:141] libmachine: (ha-105786-m02)     </disk>
	I0314 18:19:02.187898  960722 main.go:141] libmachine: (ha-105786-m02)     <disk type='file' device='disk'>
	I0314 18:19:02.187907  960722 main.go:141] libmachine: (ha-105786-m02)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:19:02.187919  960722 main.go:141] libmachine: (ha-105786-m02)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/ha-105786-m02.rawdisk'/>
	I0314 18:19:02.187931  960722 main.go:141] libmachine: (ha-105786-m02)       <target dev='hda' bus='virtio'/>
	I0314 18:19:02.187944  960722 main.go:141] libmachine: (ha-105786-m02)     </disk>
	I0314 18:19:02.187961  960722 main.go:141] libmachine: (ha-105786-m02)     <interface type='network'>
	I0314 18:19:02.187982  960722 main.go:141] libmachine: (ha-105786-m02)       <source network='mk-ha-105786'/>
	I0314 18:19:02.187993  960722 main.go:141] libmachine: (ha-105786-m02)       <model type='virtio'/>
	I0314 18:19:02.188002  960722 main.go:141] libmachine: (ha-105786-m02)     </interface>
	I0314 18:19:02.188009  960722 main.go:141] libmachine: (ha-105786-m02)     <interface type='network'>
	I0314 18:19:02.188015  960722 main.go:141] libmachine: (ha-105786-m02)       <source network='default'/>
	I0314 18:19:02.188022  960722 main.go:141] libmachine: (ha-105786-m02)       <model type='virtio'/>
	I0314 18:19:02.188027  960722 main.go:141] libmachine: (ha-105786-m02)     </interface>
	I0314 18:19:02.188037  960722 main.go:141] libmachine: (ha-105786-m02)     <serial type='pty'>
	I0314 18:19:02.188054  960722 main.go:141] libmachine: (ha-105786-m02)       <target port='0'/>
	I0314 18:19:02.188071  960722 main.go:141] libmachine: (ha-105786-m02)     </serial>
	I0314 18:19:02.188080  960722 main.go:141] libmachine: (ha-105786-m02)     <console type='pty'>
	I0314 18:19:02.188091  960722 main.go:141] libmachine: (ha-105786-m02)       <target type='serial' port='0'/>
	I0314 18:19:02.188107  960722 main.go:141] libmachine: (ha-105786-m02)     </console>
	I0314 18:19:02.188118  960722 main.go:141] libmachine: (ha-105786-m02)     <rng model='virtio'>
	I0314 18:19:02.188129  960722 main.go:141] libmachine: (ha-105786-m02)       <backend model='random'>/dev/random</backend>
	I0314 18:19:02.188151  960722 main.go:141] libmachine: (ha-105786-m02)     </rng>
	I0314 18:19:02.188162  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.188172  960722 main.go:141] libmachine: (ha-105786-m02)     
	I0314 18:19:02.188180  960722 main.go:141] libmachine: (ha-105786-m02)   </devices>
	I0314 18:19:02.188190  960722 main.go:141] libmachine: (ha-105786-m02) </domain>
	I0314 18:19:02.188200  960722 main.go:141] libmachine: (ha-105786-m02) 
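
The XML printed line by line above is the libvirt domain definition for the ha-105786-m02 VM (CD-ROM boot ISO, raw disk, two virtio NICs on the mk-ha-105786 and default networks, a serial console, and a virtio RNG). A minimal sketch of defining and starting such a domain with the Go libvirt bindings, assuming the libvirt.org/go/libvirt package; this only illustrates the same operation and is not minikube's actual code path:

package main

import (
	"log"
	"os"

	"libvirt.org/go/libvirt"
)

func main() {
	// Domain XML as printed in the log, saved to a local file for this sketch.
	xml, err := os.ReadFile("ha-105786-m02.xml")
	if err != nil {
		log.Fatal(err)
	}
	conn, err := libvirt.NewConnect("qemu:///system") // KVMQemuURI from the cluster config
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	dom, err := conn.DomainDefineXML(string(xml)) // "define libvirt domain using xml"
	if err != nil {
		log.Fatal(err)
	}
	defer dom.Free()

	if err := dom.Create(); err != nil { // boots the VM ("Creating domain...")
		log.Fatal(err)
	}
}
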
	I0314 18:19:02.195479  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:3c:23:3b in network default
	I0314 18:19:02.196233  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring networks are active...
	I0314 18:19:02.196255  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:02.197157  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring network default is active
	I0314 18:19:02.197489  960722 main.go:141] libmachine: (ha-105786-m02) Ensuring network mk-ha-105786 is active
	I0314 18:19:02.197915  960722 main.go:141] libmachine: (ha-105786-m02) Getting domain xml...
	I0314 18:19:02.198754  960722 main.go:141] libmachine: (ha-105786-m02) Creating domain...
	I0314 18:19:03.462883  960722 main.go:141] libmachine: (ha-105786-m02) Waiting to get IP...
	I0314 18:19:03.464044  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.464497  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.464530  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.464461  961068 retry.go:31] will retry after 187.92215ms: waiting for machine to come up
	I0314 18:19:03.654271  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.654865  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.654891  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.654817  961068 retry.go:31] will retry after 341.857787ms: waiting for machine to come up
	I0314 18:19:03.998431  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:03.999032  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:03.999061  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:03.998976  961068 retry.go:31] will retry after 400.056291ms: waiting for machine to come up
	I0314 18:19:04.400712  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:04.401264  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:04.401300  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:04.401218  961068 retry.go:31] will retry after 423.388529ms: waiting for machine to come up
	I0314 18:19:04.825914  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:04.826470  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:04.826506  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:04.826407  961068 retry.go:31] will retry after 607.405727ms: waiting for machine to come up
	I0314 18:19:05.435370  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:05.435814  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:05.435837  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:05.435757  961068 retry.go:31] will retry after 608.06293ms: waiting for machine to come up
	I0314 18:19:06.045458  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:06.045972  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:06.046022  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:06.045918  961068 retry.go:31] will retry after 766.912118ms: waiting for machine to come up
	I0314 18:19:06.814534  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:06.815178  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:06.815214  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:06.815117  961068 retry.go:31] will retry after 940.207735ms: waiting for machine to come up
	I0314 18:19:07.756605  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:07.757086  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:07.757122  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:07.757024  961068 retry.go:31] will retry after 1.190260571s: waiting for machine to come up
	I0314 18:19:08.949393  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:08.949832  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:08.949857  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:08.949758  961068 retry.go:31] will retry after 1.987190642s: waiting for machine to come up
	I0314 18:19:10.939878  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:10.940509  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:10.940540  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:10.940440  961068 retry.go:31] will retry after 2.423045223s: waiting for machine to come up
	I0314 18:19:13.365954  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:13.366461  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:13.366495  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:13.366407  961068 retry.go:31] will retry after 3.422669414s: waiting for machine to come up
	I0314 18:19:16.790984  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:16.791433  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:16.791482  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:16.791385  961068 retry.go:31] will retry after 2.787821186s: waiting for machine to come up
	I0314 18:19:19.582366  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:19.582857  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find current IP address of domain ha-105786-m02 in network mk-ha-105786
	I0314 18:19:19.582881  960722 main.go:141] libmachine: (ha-105786-m02) DBG | I0314 18:19:19.582807  961068 retry.go:31] will retry after 3.642963538s: waiting for machine to come up
	I0314 18:19:23.228018  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.228396  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has current primary IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.228421  960722 main.go:141] libmachine: (ha-105786-m02) Found IP for machine: 192.168.39.245
	I0314 18:19:23.228432  960722 main.go:141] libmachine: (ha-105786-m02) Reserving static IP address...
	I0314 18:19:23.228854  960722 main.go:141] libmachine: (ha-105786-m02) DBG | unable to find host DHCP lease matching {name: "ha-105786-m02", mac: "52:54:00:c9:c4:3c", ip: "192.168.39.245"} in network mk-ha-105786
	I0314 18:19:23.304825  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Getting to WaitForSSH function...
	I0314 18:19:23.304858  960722 main.go:141] libmachine: (ha-105786-m02) Reserved static IP address: 192.168.39.245
	I0314 18:19:23.304877  960722 main.go:141] libmachine: (ha-105786-m02) Waiting for SSH to be available...
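The retry loop above polls libvirt for the m02 domain's DHCP lease, sleeping a growing, jittered interval between attempts until an address shows up. A minimal standalone sketch of that wait-for-IP pattern (waitForIP and lookupIP are illustrative names, not minikube's API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookupIP until it returns an address or the deadline passes.
// lookupIP stands in for the libvirt DHCP-lease query seen in the log above.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 500 * time.Millisecond
	for attempt := 1; time.Now().Before(deadline); attempt++ {
		if ip, err := lookupIP(); err == nil && ip != "" {
			return ip, nil
		}
		// Grow the wait and add jitter so repeated attempts do not line up.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
		fmt.Printf("retry %d: waiting %v for machine to come up\n", attempt, sleep)
		time.Sleep(sleep)
		if backoff < 4*time.Second {
			backoff *= 2
		}
	}
	return "", errors.New("timed out waiting for an IP address")
}

func main() {
	// Simulated lookup that succeeds on the fourth call.
	calls := 0
	ip, err := waitForIP(func() (string, error) {
		calls++
		if calls < 4 {
			return "", errors.New("no lease yet")
		}
		return "192.168.39.245", nil
	}, 30*time.Second)
	fmt.Println(ip, err)
}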
	I0314 18:19:23.307770  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.308270  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:minikube Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.308299  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.308468  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using SSH client type: external
	I0314 18:19:23.308494  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa (-rw-------)
	I0314 18:19:23.308539  960722 main.go:141] libmachine: (ha-105786-m02) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.245 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:19:23.308558  960722 main.go:141] libmachine: (ha-105786-m02) DBG | About to run SSH command:
	I0314 18:19:23.308575  960722 main.go:141] libmachine: (ha-105786-m02) DBG | exit 0
	I0314 18:19:23.436637  960722 main.go:141] libmachine: (ha-105786-m02) DBG | SSH cmd err, output: <nil>: 
	I0314 18:19:23.436883  960722 main.go:141] libmachine: (ha-105786-m02) KVM machine creation complete!
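Once the address is known, reachability is probed by running a no-op command over SSH with the option set shown above; empty output and a zero exit status are all that "SSH is available" means here. A sketch using the external ssh client (host, key path and the retry count are placeholders):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// sshReady runs "exit 0" over ssh, mirroring the WaitForSSH probe in the log.
// Strict host-key checking is disabled because the VM's host key is freshly
// generated and not yet known.
func sshReady(host, keyPath string) error {
	args := []string{
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"docker@" + host,
		"exit 0",
	}
	return exec.Command("ssh", args...).Run()
}

func main() {
	for i := 0; i < 10; i++ {
		if err := sshReady("192.168.39.245", "/path/to/id_rsa"); err == nil {
			fmt.Println("SSH is available")
			return
		}
		time.Sleep(3 * time.Second)
	}
	fmt.Println("gave up waiting for SSH")
}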
	I0314 18:19:23.437267  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:23.437807  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:23.438017  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:23.438200  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:19:23.438213  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:19:23.439525  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:19:23.439543  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:19:23.439551  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:19:23.439559  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.442000  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.442286  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.442308  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.442472  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.442670  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.442845  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.442975  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.443136  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.443381  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.443397  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:19:23.555846  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:19:23.555877  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:19:23.555887  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.558575  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.558998  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.559039  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.559164  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.559397  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.559569  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.559738  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.559956  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.560131  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.560142  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:19:23.673387  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:19:23.673451  960722 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:19:23.673458  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:19:23.673466  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.673775  960722 buildroot.go:166] provisioning hostname "ha-105786-m02"
	I0314 18:19:23.673806  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.673992  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.676742  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.677073  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.677092  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.677223  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.677409  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.677566  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.677712  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.677878  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.678069  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.678086  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786-m02 && echo "ha-105786-m02" | sudo tee /etc/hostname
	I0314 18:19:23.809827  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786-m02
	
	I0314 18:19:23.809878  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.812967  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.813384  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.813410  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.813621  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:23.813812  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.813987  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:23.814117  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:23.814299  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:23.814509  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:23.814527  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786-m02' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786-m02/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786-m02' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:19:23.939089  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
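Hostname provisioning is a single shell pipeline: set the transient hostname, persist it to /etc/hostname, then make sure /etc/hosts carries a matching 127.0.1.1 entry, editing an existing one in place rather than appending a duplicate. A sketch that builds the same command string as the two SSH commands above (the helper name is illustrative):

package main

import "fmt"

// hostnameCmd assembles the provisioning shell run over SSH in the log above.
func hostnameCmd(name string) string {
	return fmt.Sprintf(`sudo hostname %[1]s && echo "%[1]s" | sudo tee /etc/hostname
if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, name)
}

func main() {
	fmt.Println(hostnameCmd("ha-105786-m02"))
}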
	I0314 18:19:23.939121  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:19:23.939150  960722 buildroot.go:174] setting up certificates
	I0314 18:19:23.939170  960722 provision.go:84] configureAuth start
	I0314 18:19:23.939182  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetMachineName
	I0314 18:19:23.939507  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:23.942279  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.942667  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.942695  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.942867  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:23.945157  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.945498  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:23.945527  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:23.945633  960722 provision.go:143] copyHostCerts
	I0314 18:19:23.945685  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:19:23.945717  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:19:23.945727  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:19:23.945790  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:19:23.945910  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:19:23.945929  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:19:23.945943  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:19:23.945970  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:19:23.946027  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:19:23.946050  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:19:23.946057  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:19:23.946078  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
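copyHostCerts follows a found/remove/copy pattern so a stale ca.pem, cert.pem or key.pem never survives in the target directory. A local-filesystem sketch of that idiom, with placeholder paths standing in for the .minikube locations above:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyHostCert deletes an existing destination first, then copies the source,
// mirroring the "found ..., removing ..." / "cp: ..." sequence in the log.
func copyHostCert(src, dst string) error {
	if _, err := os.Stat(dst); err == nil {
		if err := os.Remove(dst); err != nil {
			return fmt.Errorf("rm %s: %w", dst, err)
		}
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	if err := os.MkdirAll(filepath.Dir(dst), 0o755); err != nil {
		return err
	}
	out, err := os.OpenFile(dst, os.O_WRONLY|os.O_CREATE|os.O_TRUNC, 0o600)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	// Placeholder paths; the real run copies ca.pem, cert.pem and key.pem.
	if err := copyHostCert("certs/ca.pem", "out/ca.pem"); err != nil {
		fmt.Println("copy failed:", err)
	}
}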
	I0314 18:19:23.946155  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786-m02 san=[127.0.0.1 192.168.39.245 ha-105786-m02 localhost minikube]
	I0314 18:19:24.180161  960722 provision.go:177] copyRemoteCerts
	I0314 18:19:24.180240  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:19:24.180269  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.182870  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.183182  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.183229  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.183344  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.183531  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.183692  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.183883  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.271554  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:19:24.271638  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:19:24.298761  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:19:24.298833  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:19:24.327009  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:19:24.327078  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:19:24.353565  960722 provision.go:87] duration metric: took 414.379263ms to configureAuth
	I0314 18:19:24.353593  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:19:24.353751  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:24.353825  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.356549  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.356928  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.356952  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.357115  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.357324  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.357494  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.357675  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.357863  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:24.358075  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:24.358099  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:19:24.640322  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:19:24.640368  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:19:24.640381  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetURL
	I0314 18:19:24.641774  960722 main.go:141] libmachine: (ha-105786-m02) DBG | Using libvirt version 6000000
	I0314 18:19:24.643922  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.644313  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.644347  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.644512  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:19:24.644529  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:19:24.644537  960722 client.go:171] duration metric: took 22.913684551s to LocalClient.Create
	I0314 18:19:24.644561  960722 start.go:167] duration metric: took 22.913762805s to libmachine.API.Create "ha-105786"
	I0314 18:19:24.644572  960722 start.go:293] postStartSetup for "ha-105786-m02" (driver="kvm2")
	I0314 18:19:24.644591  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:19:24.644625  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.644893  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:19:24.644924  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.646994  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.647308  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.647340  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.647478  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.647656  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.647816  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.647952  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.735859  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:19:24.740830  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:19:24.740857  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:19:24.740920  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:19:24.740990  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:19:24.741001  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:19:24.741084  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:19:24.751988  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:19:24.779482  960722 start.go:296] duration metric: took 134.895535ms for postStartSetup
	I0314 18:19:24.779534  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetConfigRaw
	I0314 18:19:24.780095  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:24.782938  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.783265  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.783293  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.783592  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:19:24.783778  960722 start.go:128] duration metric: took 23.072599138s to createHost
	I0314 18:19:24.783814  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.785917  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.786226  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.786257  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.786409  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.786560  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.786722  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.786854  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.787016  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:19:24.787204  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.245 22 <nil> <nil>}
	I0314 18:19:24.787217  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:19:24.901400  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440364.867809451
	
	I0314 18:19:24.901435  960722 fix.go:216] guest clock: 1710440364.867809451
	I0314 18:19:24.901446  960722 fix.go:229] Guest: 2024-03-14 18:19:24.867809451 +0000 UTC Remote: 2024-03-14 18:19:24.783790213 +0000 UTC m=+81.937635217 (delta=84.019238ms)
	I0314 18:19:24.901474  960722 fix.go:200] guest clock delta is within tolerance: 84.019238ms
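The guest clock check reads `date +%s.%N` on the VM, converts the output to a timestamp and compares it with the host clock; here the drift is about 84ms, which is within tolerance, so no resync is needed. A sketch of that comparison (the one-second tolerance below is an assumed value, for illustration only):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns the absolute
// difference from the supplied host timestamp.
func clockDelta(guestOut string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestOut), 64)
	if err != nil {
		return 0, err
	}
	sec, frac := math.Modf(secs)
	guest := time.Unix(int64(sec), int64(frac*1e9))
	delta := host.Sub(guest)
	if delta < 0 {
		delta = -delta
	}
	return delta, nil
}

func main() {
	// Values taken from the log lines above.
	delta, err := clockDelta("1710440364.867809451", time.Unix(1710440364, 783790213))
	if err != nil {
		panic(err)
	}
	const tolerance = time.Second // assumed threshold, for illustration only
	fmt.Printf("guest/host clock delta %v, within tolerance: %v\n", delta, delta < tolerance)
}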
	I0314 18:19:24.901482  960722 start.go:83] releasing machines lock for "ha-105786-m02", held for 23.190429572s
	I0314 18:19:24.901514  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.901832  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:24.904756  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.905107  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.905148  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.907291  960722 out.go:177] * Found network options:
	I0314 18:19:24.908683  960722 out.go:177]   - NO_PROXY=192.168.39.170
	W0314 18:19:24.909868  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:19:24.909900  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910417  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910606  960722 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:19:24.910718  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:19:24.910764  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	W0314 18:19:24.910798  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:19:24.910887  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:19:24.910913  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:19:24.913501  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.913745  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.913905  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.913933  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.914088  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.914208  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:24.914236  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:24.914249  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.914423  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:19:24.914441  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.914601  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:19:24.914597  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:24.914776  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:19:24.914935  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:19:25.161719  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:19:25.171333  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:19:25.171417  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:19:25.189248  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:19:25.189276  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:19:25.189355  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:19:25.206412  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:19:25.221756  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:19:25.221820  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:19:25.237185  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:19:25.252085  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:19:25.377806  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:19:25.526121  960722 docker.go:233] disabling docker service ...
	I0314 18:19:25.526190  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:19:25.542877  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:19:25.557697  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:19:25.715305  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:19:25.852627  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
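Because the node runs CRI-O, cri-dockerd and docker are taken out of the picture first: stop, disable and mask the socket and service for each, then verify neither is active. A sketch of that systemctl sequence as run over SSH above (errors are only reported here, on the assumption that a missing unit is not fatal):

package main

import (
	"fmt"
	"os/exec"
)

// disableDockerRuntimes replays the systemctl calls from the log: cri-dockerd
// first, then docker itself, so only CRI-O is left answering on the node.
func disableDockerRuntimes() {
	steps := [][]string{
		{"systemctl", "stop", "-f", "cri-docker.socket"},
		{"systemctl", "stop", "-f", "cri-docker.service"},
		{"systemctl", "disable", "cri-docker.socket"},
		{"systemctl", "mask", "cri-docker.service"},
		{"systemctl", "stop", "-f", "docker.socket"},
		{"systemctl", "stop", "-f", "docker.service"},
		{"systemctl", "disable", "docker.socket"},
		{"systemctl", "mask", "docker.service"},
	}
	for _, s := range steps {
		// Report failures but keep going; a unit may simply not be installed.
		if out, err := exec.Command("sudo", s...).CombinedOutput(); err != nil {
			fmt.Printf("%v: %v (%s)\n", s, err, out)
		}
	}
}

func main() { disableDockerRuntimes() }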
	I0314 18:19:25.871050  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:19:25.894623  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:19:25.894705  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.906913  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:19:25.906993  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.918809  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.930511  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:19:25.941955  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:19:25.953466  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:19:25.964052  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:19:25.964112  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:19:25.979587  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:19:25.990878  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:26.127427  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
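CRI-O itself is configured by editing 02-crio.conf in place: pin the pause image, set cgroup_manager to cgroupfs, drop and re-add conmon_cgroup = "pod", then reload systemd and restart crio. A sketch that assembles the same command sequence seen in the sed invocations above:

package main

import "fmt"

// crioConfCmds builds the in-place edits applied to 02-crio.conf in the log.
func crioConfCmds(pauseImage, cgroupDriver string) []string {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	return []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pauseImage, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "%s"|' %s`, cgroupDriver, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
}

func main() {
	for _, c := range crioConfCmds("registry.k8s.io/pause:3.9", "cgroupfs") {
		fmt.Println(c)
	}
}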
	I0314 18:19:26.283859  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:19:26.283950  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:19:26.290144  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:19:26.290219  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:19:26.294233  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:19:26.335253  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:19:26.335375  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:19:26.365918  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:19:26.399964  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:19:26.401385  960722 out.go:177]   - env NO_PROXY=192.168.39.170
	I0314 18:19:26.402665  960722 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:19:26.405642  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:26.406041  960722 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:19:17 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:19:26.406083  960722 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:19:26.406334  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:19:26.410963  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:19:26.425182  960722 mustload.go:65] Loading cluster: ha-105786
	I0314 18:19:26.425388  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:19:26.425664  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:26.425697  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:26.440672  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44447
	I0314 18:19:26.441113  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:26.441607  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:26.441631  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:26.441961  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:26.442159  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:19:26.443603  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:26.443940  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:26.443983  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:26.458381  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43285
	I0314 18:19:26.458760  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:26.459187  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:26.459205  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:26.459515  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:26.459689  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:26.459844  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.245
	I0314 18:19:26.459856  960722 certs.go:194] generating shared ca certs ...
	I0314 18:19:26.459874  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.460023  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:19:26.460073  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:19:26.460086  960722 certs.go:256] generating profile certs ...
	I0314 18:19:26.460177  960722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:19:26.460252  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b
	I0314 18:19:26.460278  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.254]
	I0314 18:19:26.627303  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b ...
	I0314 18:19:26.627340  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b: {Name:mka43f08d6f2befad5f191afd79378e4364c7b41 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.627543  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b ...
	I0314 18:19:26.627561  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b: {Name:mk69889debaf40240194e9108e35810aec9c2fdb Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:19:26.627660  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.47b0118b -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:19:26.627937  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.47b0118b -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:19:26.628125  960722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:19:26.628149  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:19:26.628166  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:19:26.628184  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:19:26.628200  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:19:26.628238  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:19:26.628254  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:19:26.628268  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:19:26.628285  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:19:26.628351  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:19:26.628388  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:19:26.628402  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:19:26.628443  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:19:26.628472  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:19:26.628505  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:19:26.628560  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:19:26.628597  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:19:26.628618  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:19:26.628638  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:26.628688  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:26.631676  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:26.632050  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:26.632074  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:26.632260  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:26.632483  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:26.632600  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:26.632702  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:26.704629  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:19:26.710520  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:19:26.723946  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:19:26.729482  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0314 18:19:26.742181  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:19:26.746858  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:19:26.759398  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:19:26.763892  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0314 18:19:26.776134  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:19:26.781079  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:19:26.793161  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:19:26.797438  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:19:26.809994  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:19:26.839682  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:19:26.867721  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:19:26.895815  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:19:26.923511  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 18:19:26.951247  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:19:26.977027  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:19:27.003274  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:19:27.029436  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:19:27.056352  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:19:27.081633  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:19:27.108286  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:19:27.126900  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0314 18:19:27.147015  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:19:27.166330  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0314 18:19:27.185637  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:19:27.205067  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:19:27.224522  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:19:27.243401  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:19:27.249815  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:19:27.262629  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.267430  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.267491  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:19:27.273514  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:19:27.286162  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:19:27.298719  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.303524  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.303586  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:19:27.309729  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:19:27.322726  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:19:27.335678  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.340721  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.340768  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:19:27.346602  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
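The CA installation step asks openssl for each certificate's subject hash and then points /etc/ssl/certs/<hash>.0 at the PEM, which is how OpenSSL-based clients look up trust anchors by hash. A sketch of that pairing (on the guest the ln runs under sudo over SSH; here the command is only printed):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// linkCACert computes the subject hash of a PEM and returns the symlink
// command that makes it discoverable under /etc/ssl/certs, mirroring the
// "openssl x509 -hash" / "ln -fs" pair in the log above.
func linkCACert(pemPath string) (string, error) {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		return "", err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	return fmt.Sprintf("test -L %s || ln -fs %s %s", link, pemPath, link), nil
}

func main() {
	cmd, err := linkCACert("/usr/share/ca-certificates/minikubeCA.pem")
	fmt.Println(cmd, err)
}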
	I0314 18:19:27.359268  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:19:27.363787  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:19:27.363852  960722 kubeadm.go:928] updating node {m02 192.168.39.245 8443 v1.28.4 crio true true} ...
	I0314 18:19:27.363940  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786-m02 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.245
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:19:27.363962  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:19:27.363993  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
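The generated manifest above runs kube-vip as a static pod on the control-plane node: with vip_arp and cp_enable set, the instances elect a leader through the plndr-cp-lock Lease and the leader answers ARP for the HA VIP 192.168.39.254:8443, which is what control-plane.minikube.internal resolves to. A hedged sketch that reads such a manifest and prints the VIP-related settings (gopkg.in/yaml.v3 is assumed for parsing; the struct only models the fields used here):

    package main

    import (
        "fmt"
        "os"

        "gopkg.in/yaml.v3"
    )

    // manifest models only the parts of the kube-vip static pod we inspect.
    type manifest struct {
        Spec struct {
            Containers []struct {
                Env []struct {
                    Name  string `yaml:"name"`
                    Value string `yaml:"value"`
                } `yaml:"env"`
            } `yaml:"containers"`
        } `yaml:"spec"`
    }

    func main() {
        data, err := os.ReadFile("/etc/kubernetes/manifests/kube-vip.yaml")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        var m manifest
        if err := yaml.Unmarshal(data, &m); err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        for _, c := range m.Spec.Containers {
            for _, e := range c.Env {
                switch e.Name {
                case "address", "port", "vip_interface", "vip_leasename":
                    fmt.Printf("%s=%s\n", e.Name, e.Value)
                }
            }
        }
    }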
	I0314 18:19:27.364028  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:27.375169  960722 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:19:27.375212  960722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:19:27.386518  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:19:27.386547  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:19:27.386617  960722 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet
	I0314 18:19:27.386631  960722 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:19:27.386641  960722 download.go:107] Downloading: https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm
	I0314 18:19:27.391262  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:19:27.391292  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:19:27.930878  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:19:27.930982  960722 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:19:27.936615  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:19:27.936649  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:19:28.437626  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:19:28.456388  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:19:28.456476  960722 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:19:28.461147  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%s %y" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:19:28.461178  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
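The binaries.go/download.go lines above show the fallback path when /var/lib/minikube/binaries/v1.28.4 is empty: kubectl, kubeadm and kubelet are fetched from dl.k8s.io with a checksum URL and then scp'd to the node only if the stat existence check fails. A stdlib-only sketch of the download-and-verify step (the downloadVerified helper and its checksum-file handling are assumptions, not minikube's actual downloader):

    package main

    import (
        "crypto/sha256"
        "encoding/hex"
        "fmt"
        "io"
        "net/http"
        "os"
        "strings"
    )

    // downloadVerified fetches url into dest and checks the result against the
    // SHA-256 published at url+".sha256", as implied by the checksum URLs above.
    func downloadVerified(url, dest string) error {
        resp, err := http.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()

        f, err := os.Create(dest)
        if err != nil {
            return err
        }
        defer f.Close()

        h := sha256.New()
        if _, err := io.Copy(io.MultiWriter(f, h), resp.Body); err != nil {
            return err
        }

        sumResp, err := http.Get(url + ".sha256")
        if err != nil {
            return err
        }
        defer sumResp.Body.Close()
        want, err := io.ReadAll(sumResp.Body)
        if err != nil {
            return err
        }
        got := hex.EncodeToString(h.Sum(nil))
        if !strings.HasPrefix(strings.TrimSpace(string(want)), got) {
            return fmt.Errorf("checksum mismatch for %s", url)
        }
        return nil
    }

    func main() {
        url := "https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl"
        if err := downloadVerified(url, "kubectl"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }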
	I0314 18:19:28.969853  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:19:28.980260  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:19:28.998795  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:19:29.017462  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:19:29.035504  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:19:29.039708  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
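The one-liner above makes the /etc/hosts entry idempotent: any existing line ending in control-plane.minikube.internal is dropped and a single mapping to the HA VIP is appended. An equivalent sketch using only the standard library (the ensureHostsEntry helper is illustrative):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // ensureHostsEntry rewrites hostsPath so that exactly one line maps name to
    // ip, the same effect as the grep -v / echo pipeline in the log line above.
    func ensureHostsEntry(hostsPath, ip, name string) error {
        data, err := os.ReadFile(hostsPath)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+name) {
                continue // drop any stale mapping for this hostname
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+name)
        return os.WriteFile(hostsPath, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        if err := ensureHostsEntry("/etc/hosts", "192.168.39.254", "control-plane.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
        }
    }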
	I0314 18:19:29.052609  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:19:29.182648  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:19:29.201749  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:19:29.202236  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:19:29.202279  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:19:29.218305  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34299
	I0314 18:19:29.218754  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:19:29.219288  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:19:29.219322  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:19:29.219723  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:19:29.219947  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:19:29.220109  960722 start.go:316] joinCluster: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:19:29.220250  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:19:29.220274  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:19:29.223156  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:29.223669  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:19:29.223705  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:19:29.223819  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:19:29.224027  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:19:29.224181  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:19:29.224345  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:19:29.387394  960722 start.go:342] trying to join control-plane node "m02" to cluster: &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:19:29.387458  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j8sw71.0n1hswrh9trtagi7 --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443"
	I0314 18:20:03.793170  960722 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token j8sw71.0n1hswrh9trtagi7 --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m02 --control-plane --apiserver-advertise-address=192.168.39.245 --apiserver-bind-port=8443": (34.40567307s)
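Joining m02 as a second control plane uses the bootstrap token printed by kubeadm token create --print-join-command on the primary, plus --discovery-token-ca-cert-hash, which is the SHA-256 of the DER-encoded Subject Public Key Info of the cluster CA certificate; that hash is how the joining node authenticates the cluster before trusting anything it downloads. A sketch that computes the same hash from a CA certificate (the ca.crt path is the conventional location in this setup):

    package main

    import (
        "crypto/sha256"
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
    )

    // caCertHash returns the discovery-token-ca-cert-hash for a CA certificate:
    // sha256 over the DER-encoded SubjectPublicKeyInfo of the cert's public key.
    func caCertHash(caPath string) (string, error) {
        data, err := os.ReadFile(caPath)
        if err != nil {
            return "", err
        }
        block, _ := pem.Decode(data)
        if block == nil {
            return "", fmt.Errorf("no PEM data in %s", caPath)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            return "", err
        }
        spki, err := x509.MarshalPKIXPublicKey(cert.PublicKey)
        if err != nil {
            return "", err
        }
        sum := sha256.Sum256(spki)
        return fmt.Sprintf("sha256:%x", sum), nil
    }

    func main() {
        h, err := caCertHash("/var/lib/minikube/certs/ca.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            return
        }
        fmt.Println(h) // compare with --discovery-token-ca-cert-hash in the join command
    }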
	I0314 18:20:03.793230  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:20:04.346563  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786-m02 minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=false
	I0314 18:20:04.475492  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-105786-m02 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:20:04.605925  960722 start.go:318] duration metric: took 35.385808199s to joinCluster
	I0314 18:20:04.606049  960722 start.go:234] Will wait 6m0s for node &{Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:04.607661  960722 out.go:177] * Verifying Kubernetes components...
	I0314 18:20:04.606377  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:04.609186  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:04.843807  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:20:04.885426  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:20:04.885833  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:20:04.885937  960722 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.170:8443
	I0314 18:20:04.886257  960722 node_ready.go:35] waiting up to 6m0s for node "ha-105786-m02" to be "Ready" ...
	I0314 18:20:04.886405  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:04.886417  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:04.886433  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:04.886441  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:04.899478  960722 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:20:05.387461  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:05.387484  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:05.387492  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:05.387498  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:05.392086  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:05.887327  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:05.887358  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:05.887372  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:05.887377  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:05.915477  960722 round_trippers.go:574] Response Status: 200 OK in 28 milliseconds
	I0314 18:20:06.386528  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:06.386564  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:06.386576  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:06.386581  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:06.391096  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:06.887108  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:06.887131  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:06.887142  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:06.887148  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:06.892327  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:06.893902  960722 node_ready.go:53] node "ha-105786-m02" has status "Ready":"False"
	I0314 18:20:07.386686  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:07.386711  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:07.386721  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:07.386724  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:07.394968  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:20:07.887419  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:07.887445  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:07.887453  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:07.887457  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:07.892320  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:08.387227  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:08.387251  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:08.387260  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:08.387264  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:08.390902  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:08.886738  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:08.886765  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:08.886793  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:08.886797  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:08.890641  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:09.386590  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:09.386620  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:09.386631  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:09.386638  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:09.390267  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:09.390989  960722 node_ready.go:53] node "ha-105786-m02" has status "Ready":"False"
	I0314 18:20:09.886947  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:09.886973  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:09.886983  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:09.886988  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:09.890744  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.386910  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.386939  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.386949  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.386955  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.390382  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.391412  960722 node_ready.go:49] node "ha-105786-m02" has status "Ready":"True"
	I0314 18:20:10.391443  960722 node_ready.go:38] duration metric: took 5.505132422s for node "ha-105786-m02" to be "Ready" ...
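node_ready.go simply polls GET /api/v1/nodes/<name> about every 500ms (visible in the timestamps above) until the Node reports the Ready condition, bounded by the 6m0s budget. A dependency-free sketch of that polling shape (the check function below is a placeholder; the real check inspects the Node's conditions, as the round_trippers lines show):

    package main

    import (
        "context"
        "errors"
        "fmt"
        "time"
    )

    // waitFor polls check every interval until it returns true or ctx expires,
    // the same shape as the node "Ready" wait in the log above. Sketch only.
    func waitFor(ctx context.Context, interval time.Duration, check func(context.Context) (bool, error)) error {
        tick := time.NewTicker(interval)
        defer tick.Stop()
        for {
            ok, err := check(ctx)
            if err != nil {
                return err
            }
            if ok {
                return nil
            }
            select {
            case <-ctx.Done():
                return errors.New("timed out waiting for condition")
            case <-tick.C:
            }
        }
    }

    func main() {
        ctx, cancel := context.WithTimeout(context.Background(), 6*time.Minute)
        defer cancel()
        start := time.Now()
        err := waitFor(ctx, 500*time.Millisecond, func(ctx context.Context) (bool, error) {
            // Placeholder: a real check would GET the Node and look for the
            // "Ready" condition with status "True".
            return time.Since(start) > 2*time.Second, nil
        })
        fmt.Println("ready:", err == nil, "after", time.Since(start).Round(time.Millisecond))
    }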
	I0314 18:20:10.391458  960722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:20:10.391558  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:10.391573  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.391583  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.391589  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.405003  960722 round_trippers.go:574] Response Status: 200 OK in 13 milliseconds
	I0314 18:20:10.411010  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.411083  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-cx8rc
	I0314 18:20:10.411092  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.411099  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.411104  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.415483  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:10.416287  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.416316  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.416327  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.416332  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.419105  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.419998  960722 pod_ready.go:92] pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.420035  960722 pod_ready.go:81] duration metric: took 8.983237ms for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.420044  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.420089  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsddl
	I0314 18:20:10.420098  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.420105  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.420110  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.423501  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.424705  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.424734  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.424745  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.424751  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.427816  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.428917  960722 pod_ready.go:92] pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.428932  960722 pod_ready.go:81] duration metric: took 8.882551ms for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.428941  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.428994  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786
	I0314 18:20:10.429003  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.429010  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.429013  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.431546  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.432130  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:10.432143  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.432150  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.432153  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.435304  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.435905  960722 pod_ready.go:92] pod "etcd-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:10.435920  960722 pod_ready.go:81] duration metric: took 6.973841ms for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.435929  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:10.435970  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:10.435979  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.435985  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.435990  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.438803  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.439389  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.439408  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.439418  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.439425  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.442360  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:10.936677  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:10.936706  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.936715  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.936719  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.940339  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:10.941210  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:10.941230  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:10.941237  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:10.941239  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:10.944113  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:11.437016  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:11.437038  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.437047  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.437050  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.440773  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:11.441314  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:11.441332  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.441338  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.441343  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.444263  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:11.936616  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:11.936637  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.936645  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.936650  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.940365  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:11.941270  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:11.941287  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:11.941294  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:11.941297  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:11.944311  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:12.436394  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:12.436416  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.436423  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.436428  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.440606  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:12.441378  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:12.441402  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.441413  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.441420  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.444611  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.445194  960722 pod_ready.go:102] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"False"
	I0314 18:20:12.937037  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:20:12.937061  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.937068  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.937072  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.941129  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:12.941848  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:12.941864  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.941871  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.941874  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.945312  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.945977  960722 pod_ready.go:92] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:12.945997  960722 pod_ready.go:81] duration metric: took 2.510061796s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
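pod_ready.go applies the same pattern per pod: each iteration fetches the Pod and reports "Ready":"True" only when status.conditions contains a Ready condition with status True (the second GET in each pair above re-reads the hosting node). A small sketch of that condition check against the Pod JSON, using minimal structs rather than client-go types:

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // pod models only the status.conditions slice that the readiness check uses.
    type pod struct {
        Status struct {
            Conditions []struct {
                Type   string `json:"type"`
                Status string `json:"status"`
            } `json:"conditions"`
        } `json:"status"`
    }

    // podIsReady reports whether the Pod has condition type=Ready, status=True.
    func podIsReady(raw []byte) (bool, error) {
        var p pod
        if err := json.Unmarshal(raw, &p); err != nil {
            return false, err
        }
        for _, c := range p.Status.Conditions {
            if c.Type == "Ready" {
                return c.Status == "True", nil
            }
        }
        return false, nil
    }

    func main() {
        raw := []byte(`{"status":{"conditions":[{"type":"PodScheduled","status":"True"},{"type":"Ready","status":"True"}]}}`)
        ok, err := podIsReady(raw)
        fmt.Println(ok, err)
    }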
	I0314 18:20:12.946010  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.946058  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786
	I0314 18:20:12.946068  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.946075  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.946080  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.949209  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:12.949841  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:12.949858  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.949866  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.949870  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.952233  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:12.952748  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:12.952764  960722 pod_ready.go:81] duration metric: took 6.746234ms for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.952772  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:12.987042  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786
	I0314 18:20:12.987057  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:12.987064  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:12.987069  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:12.989880  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:20:13.186957  960722 request.go:629] Waited for 196.361998ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.187046  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.187054  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.187060  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.187065  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.190577  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.191335  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.191357  960722 pod_ready.go:81] duration metric: took 238.577402ms for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.191367  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.387841  960722 request.go:629] Waited for 196.392155ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:20:13.387924  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:20:13.387932  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.387940  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.387947  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.391798  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.588005  960722 request.go:629] Waited for 195.407854ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:13.588069  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:13.588076  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.588111  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.588123  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.591735  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.592845  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.592867  960722 pod_ready.go:81] duration metric: took 401.493419ms for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.592876  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.786933  960722 request.go:629] Waited for 193.970665ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:20:13.787029  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:20:13.787040  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.787053  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.787062  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.790577  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:13.986935  960722 request.go:629] Waited for 195.359941ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.987046  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:13.987080  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:13.987093  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:13.987097  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:13.991218  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:13.991858  960722 pod_ready.go:92] pod "kube-proxy-hd8mx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:13.991880  960722 pod_ready.go:81] duration metric: took 398.997636ms for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:13.991890  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.186927  960722 request.go:629] Waited for 194.956029ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:20:14.187037  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:20:14.187046  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.187095  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.187108  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.191255  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:14.387256  960722 request.go:629] Waited for 195.280121ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:14.387315  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:14.387320  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.387330  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.387334  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.390941  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:14.391600  960722 pod_ready.go:92] pod "kube-proxy-qpz89" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:14.391617  960722 pod_ready.go:81] duration metric: took 399.721436ms for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.391631  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.587729  960722 request.go:629] Waited for 196.012378ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:20:14.587807  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:20:14.587812  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.587819  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.587823  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.592229  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:14.787292  960722 request.go:629] Waited for 194.393534ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:14.787359  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:20:14.787364  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.787371  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.787377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.790943  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:14.791580  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:14.791602  960722 pod_ready.go:81] duration metric: took 399.959535ms for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.791612  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:14.987819  960722 request.go:629] Waited for 196.101347ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:20:14.987897  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:20:14.987906  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:14.987914  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:14.987921  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:14.991143  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:20:15.187054  960722 request.go:629] Waited for 195.314505ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:15.187145  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:20:15.187152  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.187162  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.187168  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.192407  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:15.193184  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:20:15.193217  960722 pod_ready.go:81] duration metric: took 401.59897ms for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:20:15.193232  960722 pod_ready.go:38] duration metric: took 4.801751417s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
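The "Waited for ... due to client-side throttling, not priority and fairness" lines come from client-go's client-side rate limiter: once the burst of requests is spent, each additional call blocks briefly before being sent, which is why the paired pod and node GETs above each pause for roughly 200ms. A minimal token-bucket sketch of that behavior using golang.org/x/time/rate (the QPS and burst values are illustrative, not the exact defaults used in this run):

    package main

    import (
        "context"
        "fmt"
        "time"

        "golang.org/x/time/rate"
    )

    func main() {
        // ~5 requests/s with a burst of 10; requests beyond the burst queue up.
        limiter := rate.NewLimiter(rate.Limit(5), 10)
        ctx := context.Background()
        for i := 0; i < 15; i++ {
            start := time.Now()
            if err := limiter.Wait(ctx); err != nil { // blocks once the burst is spent
                fmt.Println(err)
                return
            }
            if wait := time.Since(start); wait > time.Millisecond {
                fmt.Printf("request %d waited %v due to client-side throttling\n", i, wait.Round(time.Millisecond))
            }
        }
    }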
	I0314 18:20:15.193251  960722 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:20:15.193356  960722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:20:15.211663  960722 api_server.go:72] duration metric: took 10.605552869s to wait for apiserver process to appear ...
	I0314 18:20:15.211689  960722 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:20:15.211719  960722 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0314 18:20:15.216515  960722 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0314 18:20:15.216590  960722 round_trippers.go:463] GET https://192.168.39.170:8443/version
	I0314 18:20:15.216601  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.216611  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.216621  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.217612  960722 round_trippers.go:574] Response Status: 200 OK in 0 milliseconds
	I0314 18:20:15.217737  960722 api_server.go:141] control plane version: v1.28.4
	I0314 18:20:15.217758  960722 api_server.go:131] duration metric: took 6.053816ms to wait for apiserver health ...
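After the pgrep process check, api_server.go probes https://<node-ip>:8443/healthz and expects a 200 response with body "ok", then reads /version for the control-plane version. A hedged sketch of that probe using a client certificate, key and CA like the ones shown in the kapi.go config above (the healthz helper itself is illustrative):

    package main

    import (
        "crypto/tls"
        "crypto/x509"
        "fmt"
        "io"
        "net/http"
        "os"
    )

    // healthz probes an apiserver /healthz endpoint with a client cert and CA,
    // expecting a 200 "ok" body, like the api_server.go check above. Sketch only.
    func healthz(base, certFile, keyFile, caFile string) error {
        cert, err := tls.LoadX509KeyPair(certFile, keyFile)
        if err != nil {
            return err
        }
        caPEM, err := os.ReadFile(caFile)
        if err != nil {
            return err
        }
        pool := x509.NewCertPool()
        pool.AppendCertsFromPEM(caPEM)

        client := &http.Client{Transport: &http.Transport{
            TLSClientConfig: &tls.Config{Certificates: []tls.Certificate{cert}, RootCAs: pool},
        }}
        resp, err := client.Get(base + "/healthz")
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("healthz: %d %q", resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        err := healthz("https://192.168.39.170:8443", "client.crt", "client.key", "ca.crt")
        fmt.Println("healthy:", err == nil, err)
    }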
	I0314 18:20:15.217769  960722 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:20:15.387184  960722 request.go:629] Waited for 169.33673ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.387268  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.387276  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.387284  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.387290  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.392364  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:20:15.400517  960722 system_pods.go:59] 16 kube-system pods found
	I0314 18:20:15.400556  960722 system_pods.go:61] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:20:15.400564  960722 system_pods.go:61] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:20:15.400569  960722 system_pods.go:61] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:20:15.400575  960722 system_pods.go:61] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:20:15.400579  960722 system_pods.go:61] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:20:15.400583  960722 system_pods.go:61] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:20:15.400588  960722 system_pods.go:61] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:20:15.400594  960722 system_pods.go:61] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:20:15.400603  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:20:15.400608  960722 system_pods.go:61] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:20:15.400614  960722 system_pods.go:61] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:20:15.400622  960722 system_pods.go:61] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:20:15.400627  960722 system_pods.go:61] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:20:15.400639  960722 system_pods.go:61] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.400650  960722 system_pods.go:61] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.400660  960722 system_pods.go:61] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:20:15.400669  960722 system_pods.go:74] duration metric: took 182.888268ms to wait for pod list to return data ...
	I0314 18:20:15.400681  960722 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:20:15.587068  960722 request.go:629] Waited for 186.276852ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:20:15.587153  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:20:15.587163  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.587173  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.587181  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.591424  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:15.591776  960722 default_sa.go:45] found service account: "default"
	I0314 18:20:15.591804  960722 default_sa.go:55] duration metric: took 191.114412ms for default service account to be created ...
	I0314 18:20:15.591816  960722 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:20:15.787310  960722 request.go:629] Waited for 195.392339ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.787397  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:20:15.787404  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.787416  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.787423  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.796268  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:20:15.801328  960722 system_pods.go:86] 17 kube-system pods found
	I0314 18:20:15.801360  960722 system_pods.go:89] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:20:15.801369  960722 system_pods.go:89] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:20:15.801375  960722 system_pods.go:89] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:20:15.801380  960722 system_pods.go:89] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:20:15.801386  960722 system_pods.go:89] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:20:15.801392  960722 system_pods.go:89] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:20:15.801398  960722 system_pods.go:89] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:20:15.801403  960722 system_pods.go:89] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Pending
	I0314 18:20:15.801409  960722 system_pods.go:89] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:20:15.801419  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:20:15.801428  960722 system_pods.go:89] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:20:15.801437  960722 system_pods.go:89] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:20:15.801445  960722 system_pods.go:89] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:20:15.801454  960722 system_pods.go:89] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:20:15.801469  960722 system_pods.go:89] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.801483  960722 system_pods.go:89] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:20:15.801494  960722 system_pods.go:89] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:20:15.801506  960722 system_pods.go:126] duration metric: took 209.682732ms to wait for k8s-apps to be running ...
	I0314 18:20:15.801517  960722 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:20:15.801583  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:20:15.824716  960722 system_svc.go:56] duration metric: took 23.189692ms WaitForService to wait for kubelet
	I0314 18:20:15.824758  960722 kubeadm.go:576] duration metric: took 11.218651856s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:20:15.824785  960722 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:20:15.987199  960722 request.go:629] Waited for 162.322824ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes
	I0314 18:20:15.987266  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes
	I0314 18:20:15.987271  960722 round_trippers.go:469] Request Headers:
	I0314 18:20:15.987279  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:20:15.987284  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:20:15.991330  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:20:15.992311  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:20:15.992338  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:20:15.992352  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:20:15.992356  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:20:15.992360  960722 node_conditions.go:105] duration metric: took 167.569722ms to run NodePressure ...
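node_conditions.go lists the nodes and reads status.capacity to confirm each has enough ephemeral storage and CPU (17734596Ki and 2 CPUs per node here). A minimal sketch of extracting those quantities from a Node object with plain structs (client-go uses resource.Quantity instead; the inline JSON is trimmed sample data):

    package main

    import (
        "encoding/json"
        "fmt"
    )

    // node models only the metadata.name and status.capacity fields read above.
    type node struct {
        Metadata struct {
            Name string `json:"name"`
        } `json:"metadata"`
        Status struct {
            Capacity map[string]string `json:"capacity"`
        } `json:"status"`
    }

    func main() {
        raw := []byte(`{"metadata":{"name":"ha-105786"},"status":{"capacity":{"cpu":"2","ephemeral-storage":"17734596Ki"}}}`)
        var n node
        if err := json.Unmarshal(raw, &n); err != nil {
            fmt.Println(err)
            return
        }
        fmt.Printf("%s: ephemeral-storage=%s cpu=%s\n",
            n.Metadata.Name, n.Status.Capacity["ephemeral-storage"], n.Status.Capacity["cpu"])
    }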
	I0314 18:20:15.992376  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:20:15.992416  960722 start.go:254] writing updated cluster config ...
	I0314 18:20:15.994795  960722 out.go:177] 
	I0314 18:20:15.996915  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:15.997013  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:15.999109  960722 out.go:177] * Starting "ha-105786-m03" control-plane node in "ha-105786" cluster
	I0314 18:20:16.000469  960722 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:20:16.000499  960722 cache.go:56] Caching tarball of preloaded images
	I0314 18:20:16.000623  960722 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:20:16.000640  960722 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:20:16.000744  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:16.000982  960722 start.go:360] acquireMachinesLock for ha-105786-m03: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:20:16.001040  960722 start.go:364] duration metric: took 34.277µs to acquireMachinesLock for "ha-105786-m03"
	I0314 18:20:16.001062  960722 start.go:93] Provisioning new machine with config: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name:m03 IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:16.001168  960722 start.go:125] createHost starting for "m03" (driver="kvm2")
	I0314 18:20:16.002892  960722 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 18:20:16.002985  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:16.003028  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:16.018758  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41145
	I0314 18:20:16.019190  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:16.019709  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:16.019730  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:16.020073  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:16.020330  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:16.020505  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:16.020687  960722 start.go:159] libmachine.API.Create for "ha-105786" (driver="kvm2")
	I0314 18:20:16.020720  960722 client.go:168] LocalClient.Create starting
	I0314 18:20:16.020748  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 18:20:16.020782  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:20:16.020798  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:20:16.020855  960722 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 18:20:16.020874  960722 main.go:141] libmachine: Decoding PEM data...
	I0314 18:20:16.020885  960722 main.go:141] libmachine: Parsing certificate...
	I0314 18:20:16.020901  960722 main.go:141] libmachine: Running pre-create checks...
	I0314 18:20:16.020909  960722 main.go:141] libmachine: (ha-105786-m03) Calling .PreCreateCheck
	I0314 18:20:16.021098  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:16.021494  960722 main.go:141] libmachine: Creating machine...
	I0314 18:20:16.021508  960722 main.go:141] libmachine: (ha-105786-m03) Calling .Create
	I0314 18:20:16.021673  960722 main.go:141] libmachine: (ha-105786-m03) Creating KVM machine...
	I0314 18:20:16.022984  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found existing default KVM network
	I0314 18:20:16.023177  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found existing private KVM network mk-ha-105786
	I0314 18:20:16.023318  960722 main.go:141] libmachine: (ha-105786-m03) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 ...
	I0314 18:20:16.023344  960722 main.go:141] libmachine: (ha-105786-m03) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:20:16.023460  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.023300  961384 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:20:16.023554  960722 main.go:141] libmachine: (ha-105786-m03) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 18:20:16.271798  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.271649  961384 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa...
	I0314 18:20:16.379260  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.379112  961384 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/ha-105786-m03.rawdisk...
	I0314 18:20:16.379289  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Writing magic tar header
	I0314 18:20:16.379305  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Writing SSH key tar header
	I0314 18:20:16.379313  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:16.379258  961384 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 ...
	I0314 18:20:16.379384  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03
	I0314 18:20:16.379457  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03 (perms=drwx------)
	I0314 18:20:16.379484  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 18:20:16.379499  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 18:20:16.379520  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:20:16.379534  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 18:20:16.379553  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 18:20:16.379570  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home/jenkins
	I0314 18:20:16.379585  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 18:20:16.379605  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 18:20:16.379619  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 18:20:16.379633  960722 main.go:141] libmachine: (ha-105786-m03) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 18:20:16.379644  960722 main.go:141] libmachine: (ha-105786-m03) Creating domain...
	I0314 18:20:16.379655  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Checking permissions on dir: /home
	I0314 18:20:16.379671  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Skipping /home - not owner
	I0314 18:20:16.380813  960722 main.go:141] libmachine: (ha-105786-m03) define libvirt domain using xml: 
	I0314 18:20:16.380837  960722 main.go:141] libmachine: (ha-105786-m03) <domain type='kvm'>
	I0314 18:20:16.380846  960722 main.go:141] libmachine: (ha-105786-m03)   <name>ha-105786-m03</name>
	I0314 18:20:16.380851  960722 main.go:141] libmachine: (ha-105786-m03)   <memory unit='MiB'>2200</memory>
	I0314 18:20:16.380857  960722 main.go:141] libmachine: (ha-105786-m03)   <vcpu>2</vcpu>
	I0314 18:20:16.380866  960722 main.go:141] libmachine: (ha-105786-m03)   <features>
	I0314 18:20:16.380873  960722 main.go:141] libmachine: (ha-105786-m03)     <acpi/>
	I0314 18:20:16.380877  960722 main.go:141] libmachine: (ha-105786-m03)     <apic/>
	I0314 18:20:16.380886  960722 main.go:141] libmachine: (ha-105786-m03)     <pae/>
	I0314 18:20:16.380893  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.380911  960722 main.go:141] libmachine: (ha-105786-m03)   </features>
	I0314 18:20:16.380929  960722 main.go:141] libmachine: (ha-105786-m03)   <cpu mode='host-passthrough'>
	I0314 18:20:16.380943  960722 main.go:141] libmachine: (ha-105786-m03)   
	I0314 18:20:16.380949  960722 main.go:141] libmachine: (ha-105786-m03)   </cpu>
	I0314 18:20:16.380955  960722 main.go:141] libmachine: (ha-105786-m03)   <os>
	I0314 18:20:16.380973  960722 main.go:141] libmachine: (ha-105786-m03)     <type>hvm</type>
	I0314 18:20:16.380983  960722 main.go:141] libmachine: (ha-105786-m03)     <boot dev='cdrom'/>
	I0314 18:20:16.380993  960722 main.go:141] libmachine: (ha-105786-m03)     <boot dev='hd'/>
	I0314 18:20:16.381020  960722 main.go:141] libmachine: (ha-105786-m03)     <bootmenu enable='no'/>
	I0314 18:20:16.381035  960722 main.go:141] libmachine: (ha-105786-m03)   </os>
	I0314 18:20:16.381081  960722 main.go:141] libmachine: (ha-105786-m03)   <devices>
	I0314 18:20:16.381111  960722 main.go:141] libmachine: (ha-105786-m03)     <disk type='file' device='cdrom'>
	I0314 18:20:16.381134  960722 main.go:141] libmachine: (ha-105786-m03)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/boot2docker.iso'/>
	I0314 18:20:16.381147  960722 main.go:141] libmachine: (ha-105786-m03)       <target dev='hdc' bus='scsi'/>
	I0314 18:20:16.381160  960722 main.go:141] libmachine: (ha-105786-m03)       <readonly/>
	I0314 18:20:16.381170  960722 main.go:141] libmachine: (ha-105786-m03)     </disk>
	I0314 18:20:16.381184  960722 main.go:141] libmachine: (ha-105786-m03)     <disk type='file' device='disk'>
	I0314 18:20:16.381197  960722 main.go:141] libmachine: (ha-105786-m03)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 18:20:16.381214  960722 main.go:141] libmachine: (ha-105786-m03)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/ha-105786-m03.rawdisk'/>
	I0314 18:20:16.381225  960722 main.go:141] libmachine: (ha-105786-m03)       <target dev='hda' bus='virtio'/>
	I0314 18:20:16.381234  960722 main.go:141] libmachine: (ha-105786-m03)     </disk>
	I0314 18:20:16.381239  960722 main.go:141] libmachine: (ha-105786-m03)     <interface type='network'>
	I0314 18:20:16.381247  960722 main.go:141] libmachine: (ha-105786-m03)       <source network='mk-ha-105786'/>
	I0314 18:20:16.381252  960722 main.go:141] libmachine: (ha-105786-m03)       <model type='virtio'/>
	I0314 18:20:16.381257  960722 main.go:141] libmachine: (ha-105786-m03)     </interface>
	I0314 18:20:16.381262  960722 main.go:141] libmachine: (ha-105786-m03)     <interface type='network'>
	I0314 18:20:16.381282  960722 main.go:141] libmachine: (ha-105786-m03)       <source network='default'/>
	I0314 18:20:16.381298  960722 main.go:141] libmachine: (ha-105786-m03)       <model type='virtio'/>
	I0314 18:20:16.381312  960722 main.go:141] libmachine: (ha-105786-m03)     </interface>
	I0314 18:20:16.381323  960722 main.go:141] libmachine: (ha-105786-m03)     <serial type='pty'>
	I0314 18:20:16.381337  960722 main.go:141] libmachine: (ha-105786-m03)       <target port='0'/>
	I0314 18:20:16.381350  960722 main.go:141] libmachine: (ha-105786-m03)     </serial>
	I0314 18:20:16.381362  960722 main.go:141] libmachine: (ha-105786-m03)     <console type='pty'>
	I0314 18:20:16.381381  960722 main.go:141] libmachine: (ha-105786-m03)       <target type='serial' port='0'/>
	I0314 18:20:16.381394  960722 main.go:141] libmachine: (ha-105786-m03)     </console>
	I0314 18:20:16.381407  960722 main.go:141] libmachine: (ha-105786-m03)     <rng model='virtio'>
	I0314 18:20:16.381422  960722 main.go:141] libmachine: (ha-105786-m03)       <backend model='random'>/dev/random</backend>
	I0314 18:20:16.381432  960722 main.go:141] libmachine: (ha-105786-m03)     </rng>
	I0314 18:20:16.381442  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.381459  960722 main.go:141] libmachine: (ha-105786-m03)     
	I0314 18:20:16.381473  960722 main.go:141] libmachine: (ha-105786-m03)   </devices>
	I0314 18:20:16.381485  960722 main.go:141] libmachine: (ha-105786-m03) </domain>
	I0314 18:20:16.381501  960722 main.go:141] libmachine: (ha-105786-m03) 
	I0314 18:20:16.388782  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:1b:7e:1f in network default
	I0314 18:20:16.389402  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring networks are active...
	I0314 18:20:16.389429  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:16.390465  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring network default is active
	I0314 18:20:16.390750  960722 main.go:141] libmachine: (ha-105786-m03) Ensuring network mk-ha-105786 is active
	I0314 18:20:16.391248  960722 main.go:141] libmachine: (ha-105786-m03) Getting domain xml...
	I0314 18:20:16.391969  960722 main.go:141] libmachine: (ha-105786-m03) Creating domain...
	I0314 18:20:17.589663  960722 main.go:141] libmachine: (ha-105786-m03) Waiting to get IP...
	I0314 18:20:17.590484  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:17.590914  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:17.590984  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:17.590911  961384 retry.go:31] will retry after 263.626588ms: waiting for machine to come up
	I0314 18:20:17.856532  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:17.857087  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:17.857116  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:17.857041  961384 retry.go:31] will retry after 382.637785ms: waiting for machine to come up
	I0314 18:20:18.241581  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:18.242030  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:18.242062  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:18.241981  961384 retry.go:31] will retry after 367.090897ms: waiting for machine to come up
	I0314 18:20:18.610712  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:18.611255  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:18.611281  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:18.611225  961384 retry.go:31] will retry after 600.586652ms: waiting for machine to come up
	I0314 18:20:19.213062  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:19.213584  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:19.213618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:19.213525  961384 retry.go:31] will retry after 559.92281ms: waiting for machine to come up
	I0314 18:20:19.775309  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:19.775748  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:19.775781  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:19.775691  961384 retry.go:31] will retry after 574.524705ms: waiting for machine to come up
	I0314 18:20:20.351375  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:20.351868  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:20.351893  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:20.351811  961384 retry.go:31] will retry after 972.048987ms: waiting for machine to come up
	I0314 18:20:21.325550  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:21.326031  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:21.326063  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:21.325992  961384 retry.go:31] will retry after 1.371761698s: waiting for machine to come up
	I0314 18:20:22.699573  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:22.700021  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:22.700053  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:22.699967  961384 retry.go:31] will retry after 1.481455468s: waiting for machine to come up
	I0314 18:20:24.183618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:24.184136  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:24.184168  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:24.184074  961384 retry.go:31] will retry after 1.805133143s: waiting for machine to come up
	I0314 18:20:25.991346  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:25.992156  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:25.992189  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:25.992110  961384 retry.go:31] will retry after 2.770039006s: waiting for machine to come up
	I0314 18:20:28.765632  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:28.766175  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:28.766252  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:28.766166  961384 retry.go:31] will retry after 3.54565346s: waiting for machine to come up
	I0314 18:20:32.313302  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:32.313795  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:32.313823  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:32.313752  961384 retry.go:31] will retry after 2.839983125s: waiting for machine to come up
	I0314 18:20:35.155209  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:35.155526  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find current IP address of domain ha-105786-m03 in network mk-ha-105786
	I0314 18:20:35.155549  960722 main.go:141] libmachine: (ha-105786-m03) DBG | I0314 18:20:35.155486  961384 retry.go:31] will retry after 4.973546957s: waiting for machine to come up
	I0314 18:20:40.133497  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.133990  960722 main.go:141] libmachine: (ha-105786-m03) Found IP for machine: 192.168.39.190
	I0314 18:20:40.134029  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has current primary IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.134053  960722 main.go:141] libmachine: (ha-105786-m03) Reserving static IP address...
	I0314 18:20:40.134396  960722 main.go:141] libmachine: (ha-105786-m03) DBG | unable to find host DHCP lease matching {name: "ha-105786-m03", mac: "52:54:00:34:3f:75", ip: "192.168.39.190"} in network mk-ha-105786
	I0314 18:20:40.214540  960722 main.go:141] libmachine: (ha-105786-m03) Reserved static IP address: 192.168.39.190
	I0314 18:20:40.214566  960722 main.go:141] libmachine: (ha-105786-m03) Waiting for SSH to be available...
	I0314 18:20:40.214617  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Getting to WaitForSSH function...
	I0314 18:20:40.217661  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.218202  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:minikube Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.218242  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.218357  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using SSH client type: external
	I0314 18:20:40.218385  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa (-rw-------)
	I0314 18:20:40.218418  960722 main.go:141] libmachine: (ha-105786-m03) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.190 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 18:20:40.218436  960722 main.go:141] libmachine: (ha-105786-m03) DBG | About to run SSH command:
	I0314 18:20:40.218448  960722 main.go:141] libmachine: (ha-105786-m03) DBG | exit 0
	I0314 18:20:40.344502  960722 main.go:141] libmachine: (ha-105786-m03) DBG | SSH cmd err, output: <nil>: 
	I0314 18:20:40.344797  960722 main.go:141] libmachine: (ha-105786-m03) KVM machine creation complete!
	I0314 18:20:40.345142  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:40.345728  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:40.345981  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:40.346180  960722 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 18:20:40.346197  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:20:40.347581  960722 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 18:20:40.347596  960722 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 18:20:40.347602  960722 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 18:20:40.347609  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.350331  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.350772  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.350801  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.350966  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.351172  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.351341  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.351494  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.351708  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.352031  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.352047  960722 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 18:20:40.451470  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:40.451502  960722 main.go:141] libmachine: Detecting the provisioner...
	I0314 18:20:40.451512  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.454593  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.455030  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.455061  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.455263  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.455466  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.455629  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.455748  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.455909  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.456082  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.456094  960722 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 18:20:40.561434  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 18:20:40.561509  960722 main.go:141] libmachine: found compatible host: buildroot
	I0314 18:20:40.561523  960722 main.go:141] libmachine: Provisioning with buildroot...
	I0314 18:20:40.561535  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.561810  960722 buildroot.go:166] provisioning hostname "ha-105786-m03"
	I0314 18:20:40.561837  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.562093  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.564618  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.564980  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.565017  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.565118  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.565328  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.565529  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.565709  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.565881  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.566055  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.566068  960722 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786-m03 && echo "ha-105786-m03" | sudo tee /etc/hostname
	I0314 18:20:40.684093  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786-m03
	
	I0314 18:20:40.684125  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.686884  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.687210  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.687241  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.687381  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:40.687571  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.687749  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:40.687905  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:40.688073  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:40.688261  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:40.688278  960722 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786-m03' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786-m03/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786-m03' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:20:40.797541  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:20:40.797575  960722 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:20:40.797596  960722 buildroot.go:174] setting up certificates
	I0314 18:20:40.797611  960722 provision.go:84] configureAuth start
	I0314 18:20:40.797623  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetMachineName
	I0314 18:20:40.797919  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:40.800767  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.801185  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.801218  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.801418  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:40.804200  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.804646  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:40.804679  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:40.804861  960722 provision.go:143] copyHostCerts
	I0314 18:20:40.804893  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:20:40.804925  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:20:40.804935  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:20:40.805001  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:20:40.805072  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:20:40.805089  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:20:40.805096  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:20:40.805119  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:20:40.805162  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:20:40.805178  960722 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:20:40.805184  960722 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:20:40.805203  960722 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:20:40.805289  960722 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786-m03 san=[127.0.0.1 192.168.39.190 ha-105786-m03 localhost minikube]
	I0314 18:20:41.054914  960722 provision.go:177] copyRemoteCerts
	I0314 18:20:41.054977  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:20:41.055004  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.057639  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.057975  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.057998  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.058194  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.058387  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.058565  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.058698  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.144719  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:20:41.144803  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:20:41.171816  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:20:41.171887  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1208 bytes)
	I0314 18:20:41.199381  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:20:41.199468  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 18:20:41.226957  960722 provision.go:87] duration metric: took 429.333138ms to configureAuth
	I0314 18:20:41.226985  960722 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:20:41.227214  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:41.227307  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.229976  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.230427  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.230463  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.230679  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.230914  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.231089  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.231272  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.231498  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:41.231734  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:41.231753  960722 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:20:41.512036  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:20:41.512069  960722 main.go:141] libmachine: Checking connection to Docker...
	I0314 18:20:41.512080  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetURL
	I0314 18:20:41.513530  960722 main.go:141] libmachine: (ha-105786-m03) DBG | Using libvirt version 6000000
	I0314 18:20:41.516319  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.516759  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.516797  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.516916  960722 main.go:141] libmachine: Docker is up and running!
	I0314 18:20:41.516932  960722 main.go:141] libmachine: Reticulating splines...
	I0314 18:20:41.516952  960722 client.go:171] duration metric: took 25.496210948s to LocalClient.Create
	I0314 18:20:41.516990  960722 start.go:167] duration metric: took 25.49630446s to libmachine.API.Create "ha-105786"
	I0314 18:20:41.517002  960722 start.go:293] postStartSetup for "ha-105786-m03" (driver="kvm2")
	I0314 18:20:41.517019  960722 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:20:41.517042  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.517289  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:20:41.517320  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.519485  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.519861  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.519889  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.520049  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.520272  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.520449  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.520604  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.605448  960722 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:20:41.610671  960722 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:20:41.610704  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:20:41.610786  960722 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:20:41.610873  960722 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:20:41.610886  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:20:41.611012  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:20:41.621811  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:20:41.652036  960722 start.go:296] duration metric: took 135.017948ms for postStartSetup
	I0314 18:20:41.652091  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetConfigRaw
	I0314 18:20:41.652721  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:41.655406  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.655870  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.655905  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.656145  960722 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:20:41.656406  960722 start.go:128] duration metric: took 25.655223653s to createHost
	I0314 18:20:41.656441  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.659003  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.659391  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.659412  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.659549  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.659758  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.659932  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.660071  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.660253  960722 main.go:141] libmachine: Using SSH client type: native
	I0314 18:20:41.660456  960722 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.190 22 <nil> <nil>}
	I0314 18:20:41.660470  960722 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:20:41.761414  960722 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440441.743729611
	
	I0314 18:20:41.761440  960722 fix.go:216] guest clock: 1710440441.743729611
	I0314 18:20:41.761447  960722 fix.go:229] Guest: 2024-03-14 18:20:41.743729611 +0000 UTC Remote: 2024-03-14 18:20:41.656424316 +0000 UTC m=+158.810269334 (delta=87.305295ms)
	I0314 18:20:41.761464  960722 fix.go:200] guest clock delta is within tolerance: 87.305295ms
	I0314 18:20:41.761469  960722 start.go:83] releasing machines lock for "ha-105786-m03", held for 25.760417756s
	I0314 18:20:41.761487  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.761771  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:41.764594  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.764999  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.765031  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.767266  960722 out.go:177] * Found network options:
	I0314 18:20:41.768711  960722 out.go:177]   - NO_PROXY=192.168.39.170,192.168.39.245
	W0314 18:20:41.769997  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:20:41.770017  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:20:41.770030  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770550  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770778  960722 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:20:41.770920  960722 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:20:41.770963  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	W0314 18:20:41.770999  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	W0314 18:20:41.771026  960722 proxy.go:119] fail to check proxy env: Error ip not in block
	I0314 18:20:41.771103  960722 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:20:41.771128  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:20:41.773706  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774056  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774090  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.774108  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774292  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.774468  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.774564  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:41.774589  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:41.774648  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.774759  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:20:41.774883  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:41.774977  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:20:41.775149  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:20:41.775315  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:20:42.013893  960722 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:20:42.021607  960722 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:20:42.021679  960722 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:20:42.039748  960722 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 18:20:42.039777  960722 start.go:494] detecting cgroup driver to use...
	I0314 18:20:42.039853  960722 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:20:42.059119  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:20:42.074558  960722 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:20:42.074617  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:20:42.089661  960722 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:20:42.104256  960722 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:20:42.233317  960722 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:20:42.392041  960722 docker.go:233] disabling docker service ...
	I0314 18:20:42.392130  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:20:42.408543  960722 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:20:42.422591  960722 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:20:42.563722  960722 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:20:42.688792  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:20:42.704444  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:20:42.725324  960722 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:20:42.725397  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.737561  960722 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:20:42.737618  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.749624  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.761367  960722 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:20:42.773962  960722 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:20:42.786135  960722 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:20:42.796972  960722 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 18:20:42.797027  960722 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 18:20:42.811989  960722 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:20:42.822647  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:42.950792  960722 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:20:43.106453  960722 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:20:43.106542  960722 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:20:43.112384  960722 start.go:562] Will wait 60s for crictl version
	I0314 18:20:43.112441  960722 ssh_runner.go:195] Run: which crictl
	I0314 18:20:43.116759  960722 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:20:43.158761  960722 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:20:43.158863  960722 ssh_runner.go:195] Run: crio --version
	I0314 18:20:43.192334  960722 ssh_runner.go:195] Run: crio --version
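After restarting crio, the log above waits up to 60s for the socket path /var/run/crio/crio.sock and then for a working crictl. A minimal sketch of that kind of socket polling, assuming a local filesystem check (the real check runs over SSH on the node); waitForSocket is an illustrative helper, not a minikube function:

package main

import (
	"fmt"
	"os"
	"time"
)

// waitForSocket polls for a path until it exists or the timeout expires.
func waitForSocket(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("timed out waiting for %s", path)
}

func main() {
	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("crio socket is ready")
}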
	I0314 18:20:43.229877  960722 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:20:43.231273  960722 out.go:177]   - env NO_PROXY=192.168.39.170
	I0314 18:20:43.232575  960722 out.go:177]   - env NO_PROXY=192.168.39.170,192.168.39.245
	I0314 18:20:43.233764  960722 main.go:141] libmachine: (ha-105786-m03) Calling .GetIP
	I0314 18:20:43.236996  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:43.237429  960722 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:20:43.237458  960722 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:20:43.237711  960722 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:20:43.242307  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
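The hosts update above strips any stale host.minikube.internal line and appends a fresh one in a single shell pipeline. A rough local-file equivalent in Go, only for illustration (ensureHostsEntry is a hypothetical helper; the real command is executed remotely via ssh_runner):

package main

import (
	"fmt"
	"os"
	"strings"
)

// ensureHostsEntry rewrites path so it contains exactly one "<ip>\t<host>" line.
func ensureHostsEntry(path, ip, host string) error {
	data, err := os.ReadFile(path)
	if err != nil {
		return err
	}
	var kept []string
	for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
		if strings.HasSuffix(line, "\t"+host) {
			continue // drop any stale entry for this hostname
		}
		kept = append(kept, line)
	}
	kept = append(kept, ip+"\t"+host)
	return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
}

func main() {
	if err := ensureHostsEntry("/tmp/hosts-example", "192.168.39.1", "host.minikube.internal"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}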
	I0314 18:20:43.255831  960722 mustload.go:65] Loading cluster: ha-105786
	I0314 18:20:43.256090  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:20:43.256496  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:43.256558  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:43.272927  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41451
	I0314 18:20:43.273365  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:43.273806  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:43.273828  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:43.274143  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:43.274328  960722 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:20:43.275764  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:20:43.276038  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:43.276073  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:43.290709  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38899
	I0314 18:20:43.291151  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:43.291649  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:43.291671  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:43.291987  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:43.292178  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:20:43.292379  960722 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.190
	I0314 18:20:43.292400  960722 certs.go:194] generating shared ca certs ...
	I0314 18:20:43.292421  960722 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.292562  960722 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:20:43.292601  960722 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:20:43.292610  960722 certs.go:256] generating profile certs ...
	I0314 18:20:43.292676  960722 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:20:43.292700  960722 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644
	I0314 18:20:43.292714  960722 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.190 192.168.39.254]
	I0314 18:20:43.369573  960722 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 ...
	I0314 18:20:43.369603  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644: {Name:mk26652353e711860e9741d7f16cc8eff62446e9 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.369780  960722 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644 ...
	I0314 18:20:43.369798  960722 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644: {Name:mk3152f16716880926c7353afe7016ddf0844e54 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:20:43.369893  960722 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.5008f644 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:20:43.370044  960722 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.5008f644 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
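The cert step above issues a new apiserver serving certificate whose SANs cover the service IPs, all three control-plane node IPs and the HA VIP 192.168.39.254, then promotes it to apiserver.crt/key. A self-contained crypto/x509 sketch of issuing a cert with that IP SAN list; it is self-signed here for brevity, whereas the real flow signs with the cluster CA:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{CommonName: "minikube"},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(24 * 365 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the IP list in the log: service IPs, node IPs and the VIP.
		IPAddresses: []net.IP{
			net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("10.0.0.1"),
			net.ParseIP("192.168.39.170"), net.ParseIP("192.168.39.245"),
			net.ParseIP("192.168.39.190"), net.ParseIP("192.168.39.254"),
		},
	}
	// Self-signed for illustration only.
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}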
	I0314 18:20:43.370200  960722 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:20:43.370219  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:20:43.370245  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:20:43.370267  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:20:43.370286  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:20:43.370304  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:20:43.370322  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:20:43.370338  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:20:43.370356  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:20:43.370422  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:20:43.370464  960722 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:20:43.370478  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:20:43.370515  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:20:43.370547  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:20:43.370577  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:20:43.370632  960722 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:20:43.370671  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:43.370693  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:20:43.370715  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:20:43.370757  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:20:43.373565  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:43.373911  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:20:43.373940  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:43.374033  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:20:43.374212  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:20:43.374353  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:20:43.374478  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:20:43.448486  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.pub
	I0314 18:20:43.453764  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.pub --> memory (451 bytes)
	I0314 18:20:43.466380  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/sa.key
	I0314 18:20:43.470988  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/sa.key --> memory (1679 bytes)
	I0314 18:20:43.481676  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.crt
	I0314 18:20:43.486231  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.crt --> memory (1123 bytes)
	I0314 18:20:43.498692  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/front-proxy-ca.key
	I0314 18:20:43.503320  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/front-proxy-ca.key --> memory (1675 bytes)
	I0314 18:20:43.520987  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.crt
	I0314 18:20:43.526932  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.crt --> memory (1094 bytes)
	I0314 18:20:43.543635  960722 ssh_runner.go:195] Run: stat -c %!s(MISSING) /var/lib/minikube/certs/etcd/ca.key
	I0314 18:20:43.548839  960722 ssh_runner.go:447] scp /var/lib/minikube/certs/etcd/ca.key --> memory (1679 bytes)
	I0314 18:20:43.561552  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:20:43.590213  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:20:43.617864  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:20:43.645512  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:20:43.673965  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1444 bytes)
	I0314 18:20:43.700045  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:20:43.724879  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:20:43.751227  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:20:43.779325  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:20:43.805830  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:20:43.835382  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:20:43.865476  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.pub (451 bytes)
	I0314 18:20:43.883583  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/sa.key (1679 bytes)
	I0314 18:20:43.902725  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.crt (1123 bytes)
	I0314 18:20:43.921088  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/front-proxy-ca.key (1675 bytes)
	I0314 18:20:43.938665  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.crt (1094 bytes)
	I0314 18:20:43.959055  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/certs/etcd/ca.key (1679 bytes)
	I0314 18:20:43.978359  960722 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (744 bytes)
	I0314 18:20:43.997134  960722 ssh_runner.go:195] Run: openssl version
	I0314 18:20:44.005825  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:20:44.018041  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.023061  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.023121  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:20:44.029198  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:20:44.041694  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:20:44.055167  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.060067  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.060127  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:20:44.066294  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:20:44.079907  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:20:44.093439  960722 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.098784  960722 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.098837  960722 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:20:44.105059  960722 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
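The openssl/ln steps above install each CA bundle under /usr/share/ca-certificates and link it into /etc/ssl/certs under its OpenSSL subject-hash name (for example b5213941.0), which is how the trust store is looked up. A sketch of the same idea that shells out to `openssl x509 -hash -noout`; linkBySubjectHash is an illustrative helper and needs root to write into /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

// linkBySubjectHash creates /etc/ssl/certs/<hash>.0 -> certPath, where <hash>
// is the subject hash printed by `openssl x509 -hash -noout -in <certPath>`.
func linkBySubjectHash(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := fmt.Sprintf("/etc/ssl/certs/%s.0", hash)
	// The log's `test -L || ln -fs` is the shell form of this existence check.
	if _, err := os.Lstat(link); err == nil {
		return nil
	}
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}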
	I0314 18:20:44.117241  960722 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:20:44.121721  960722 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 18:20:44.121776  960722 kubeadm.go:928] updating node {m03 192.168.39.190 8443 v1.28.4 crio true true} ...
	I0314 18:20:44.121874  960722 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786-m03 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.190
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:20:44.121899  960722 kube-vip.go:105] generating kube-vip config ...
	I0314 18:20:44.121930  960722 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
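The kube-vip static pod above is generated per control-plane node; only values such as the VIP address, port and image vary. A minimal text/template sketch of that style of manifest generation; the template text below is abbreviated for illustration and is not minikube's actual template (the full manifest also sets the leader-election and lb_* env vars shown above):

package main

import (
	"os"
	"text/template"
)

const vipTmpl = `apiVersion: v1
kind: Pod
metadata:
  name: kube-vip
  namespace: kube-system
spec:
  containers:
  - name: kube-vip
    image: {{.Image}}
    args: ["manager"]
    env:
    - name: port
      value: "{{.Port}}"
    - name: address
      value: {{.VIP}}
  hostNetwork: true
`

func main() {
	t := template.Must(template.New("kube-vip").Parse(vipTmpl))
	// Values taken from the log above.
	t.Execute(os.Stdout, map[string]string{
		"Image": "ghcr.io/kube-vip/kube-vip:v0.7.1",
		"Port":  "8443",
		"VIP":   "192.168.39.254",
	})
}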
	I0314 18:20:44.121988  960722 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:20:44.132607  960722 binaries.go:47] Didn't find k8s binaries: sudo ls /var/lib/minikube/binaries/v1.28.4: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/binaries/v1.28.4': No such file or directory
	
	Initiating transfer...
	I0314 18:20:44.132649  960722 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/binaries/v1.28.4
	I0314 18:20:44.144753  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubectl.sha256
	I0314 18:20:44.144777  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl -> /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:20:44.144823  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubeadm.sha256
	I0314 18:20:44.144848  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl
	I0314 18:20:44.144853  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm -> /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:20:44.144963  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm
	I0314 18:20:44.144823  960722 binary.go:76] Not caching binary, using https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet?checksum=file:https://dl.k8s.io/release/v1.28.4/bin/linux/amd64/kubelet.sha256
	I0314 18:20:44.145038  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:20:44.155969  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubectl: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubectl: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubectl': No such file or directory
	I0314 18:20:44.155999  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubectl --> /var/lib/minikube/binaries/v1.28.4/kubectl (49885184 bytes)
	I0314 18:20:44.156019  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubeadm: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubeadm: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubeadm': No such file or directory
	I0314 18:20:44.156047  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubeadm --> /var/lib/minikube/binaries/v1.28.4/kubeadm (49102848 bytes)
	I0314 18:20:44.205345  960722 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet -> /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:20:44.205461  960722 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet
	I0314 18:20:44.314449  960722 ssh_runner.go:352] existence check for /var/lib/minikube/binaries/v1.28.4/kubelet: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/binaries/v1.28.4/kubelet: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/binaries/v1.28.4/kubelet': No such file or directory
	I0314 18:20:44.314494  960722 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/linux/amd64/v1.28.4/kubelet --> /var/lib/minikube/binaries/v1.28.4/kubelet (110850048 bytes)
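Each kube binary above is only transferred after the remote existence check fails ("Process exited with status 1" on the stat). A loose local-filesystem analogy of that check-then-copy logic, under the assumption that a size match means the binary is already in place; copyIfMissing is a hypothetical helper, and the real code stats over SSH against the cached binary:

package main

import (
	"fmt"
	"io"
	"os"
	"path/filepath"
)

// copyIfMissing copies src to dst only when dst does not exist or differs in size.
func copyIfMissing(src, dst string) error {
	sInfo, err := os.Stat(src)
	if err != nil {
		return err
	}
	if dInfo, err := os.Stat(dst); err == nil && dInfo.Size() == sInfo.Size() {
		return nil // already present, skip the transfer
	}
	if err := os.MkdirAll(filepath.Dir(dst), 0755); err != nil {
		return err
	}
	in, err := os.Open(src)
	if err != nil {
		return err
	}
	defer in.Close()
	out, err := os.OpenFile(dst, os.O_CREATE|os.O_TRUNC|os.O_WRONLY, 0755)
	if err != nil {
		return err
	}
	defer out.Close()
	_, err = io.Copy(out, in)
	return err
}

func main() {
	if err := copyIfMissing("./cache/kubelet", "/tmp/binaries/kubelet"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}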
	I0314 18:20:45.214882  960722 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /etc/kubernetes/manifests
	I0314 18:20:45.226985  960722 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (313 bytes)
	I0314 18:20:45.246323  960722 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:20:45.265747  960722 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:20:45.284264  960722 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:20:45.288879  960722 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.254	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 18:20:45.302625  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:20:45.430647  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:20:45.449240  960722 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:20:45.449585  960722 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:20:45.449637  960722 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:20:45.465079  960722 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42307
	I0314 18:20:45.465530  960722 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:20:45.466048  960722 main.go:141] libmachine: Using API Version  1
	I0314 18:20:45.466073  960722 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:20:45.466441  960722 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:20:45.466665  960722 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:20:45.466918  960722 start.go:316] joinCluster: &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}

	I0314 18:20:45.467093  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm token create --print-join-command --ttl=0"
	I0314 18:20:45.467117  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:20:45.470410  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:45.470818  960722 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:20:45.470857  960722 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:20:45.471038  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:20:45.471223  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:20:45.471367  960722 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:20:45.471556  960722 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:20:45.642229  960722 start.go:342] trying to join control-plane node "m03" to cluster: &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:20:45.642294  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zioowz.ncs2n2q41aci2frn --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443"
	I0314 18:21:14.250837  960722 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm join control-plane.minikube.internal:8443 --token zioowz.ncs2n2q41aci2frn --discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d --ignore-preflight-errors=all --cri-socket unix:///var/run/crio/crio.sock --node-name=ha-105786-m03 --control-plane --apiserver-advertise-address=192.168.39.190 --apiserver-bind-port=8443": (28.60851409s)
	I0314 18:21:14.250884  960722 ssh_runner.go:195] Run: /bin/bash -c "sudo systemctl daemon-reload && sudo systemctl enable kubelet && sudo systemctl start kubelet"
	I0314 18:21:14.918133  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes ha-105786-m03 minikube.k8s.io/updated_at=2024_03_14T18_21_14_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=ha-105786 minikube.k8s.io/primary=false
	I0314 18:21:15.063537  960722 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig taint nodes ha-105786-m03 node-role.kubernetes.io/control-plane:NoSchedule-
	I0314 18:21:15.219595  960722 start.go:318] duration metric: took 29.752671612s to joinCluster
	I0314 18:21:15.219679  960722 start.go:234] Will wait 6m0s for node &{Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 18:21:15.221279  960722 out.go:177] * Verifying Kubernetes components...
	I0314 18:21:15.220130  960722 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:21:15.222930  960722 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:21:15.558226  960722 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:21:15.584548  960722 loader.go:395] Config loaded from file:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:21:15.584870  960722 kapi.go:59] client config for ha-105786: &rest.Config{Host:"https://192.168.39.254:8443", APIPath:"", ContentConfig:rest.ContentConfig{AcceptContentTypes:"", ContentType:"", GroupVersion:(*schema.GroupVersion)(nil), NegotiatedSerializer:runtime.NegotiatedSerializer(nil)}, Username:"", Password:"", BearerToken:"", BearerTokenFile:"", Impersonate:rest.ImpersonationConfig{UserName:"", UID:"", Groups:[]string(nil), Extra:map[string][]string(nil)}, AuthProvider:<nil>, AuthConfigPersister:rest.AuthProviderConfigPersister(nil), ExecProvider:<nil>, TLSClientConfig:rest.sanitizedTLSClientConfig{Insecure:false, ServerName:"", CertFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.crt", KeyFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key", CAFile:"/home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt", CertData:[]uint8(nil), KeyData:[]uint8(nil), CAData:[]uint8(nil), NextProtos:[]string(nil)}, UserAgent:"", DisableCompression:false, Transport:http.RoundTripper(nil), WrapTransport:(transport.WrapperFunc)(0x1c55c80), QPS:0, Burst:0, RateLimiter:flowcontrol.RateLimiter(nil), WarningHandler:rest.WarningHandler(nil), Timeout:0, Dial:(func(context.Context, string, string) (net.Conn, error))(nil), Proxy:(func(*http.Request) (*url.URL, error))(nil)}
	W0314 18:21:15.584938  960722 kubeadm.go:477] Overriding stale ClientConfig host https://192.168.39.254:8443 with https://192.168.39.170:8443
	I0314 18:21:15.585248  960722 node_ready.go:35] waiting up to 6m0s for node "ha-105786-m03" to be "Ready" ...
	I0314 18:21:15.585338  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:15.585351  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:15.585363  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:15.585370  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:15.589124  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:16.085923  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:16.085947  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:16.085984  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:16.085989  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:16.091301  960722 round_trippers.go:574] Response Status: 200 OK in 5 milliseconds
	I0314 18:21:16.585699  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:16.585731  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:16.585743  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:16.585751  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:16.589694  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.086243  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:17.086264  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:17.086272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:17.086277  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:17.090180  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.585728  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:17.585755  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:17.585766  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:17.585772  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:17.589402  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:17.590184  960722 node_ready.go:53] node "ha-105786-m03" has status "Ready":"False"
	I0314 18:21:18.086352  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:18.086382  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:18.086391  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:18.086396  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:18.089751  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:18.585826  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:18.585849  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:18.585857  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:18.585869  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:18.589603  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:19.086199  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:19.086226  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:19.086237  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:19.086242  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:19.090256  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:19.585614  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:19.585638  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:19.585646  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:19.585650  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:19.589705  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:19.590523  960722 node_ready.go:53] node "ha-105786-m03" has status "Ready":"False"
	I0314 18:21:20.085912  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.085938  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.085947  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.085951  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.090230  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.090977  960722 node_ready.go:49] node "ha-105786-m03" has status "Ready":"True"
	I0314 18:21:20.090995  960722 node_ready.go:38] duration metric: took 4.505728967s for node "ha-105786-m03" to be "Ready" ...
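The node_ready wait above polls GET /api/v1/nodes/<name> until the Ready condition turns True. A hedged client-go sketch of the same loop, assuming a kubeconfig at the default location and standard k8s.io/client-go packages; nodeReady is an illustrative helper:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func nodeReady(n *corev1.Node) bool {
	for _, c := range n.Status.Conditions {
		if c.Type == corev1.NodeReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	deadline := time.Now().Add(6 * time.Minute)
	for time.Now().Before(deadline) {
		n, err := cs.CoreV1().Nodes().Get(context.TODO(), "ha-105786-m03", metav1.GetOptions{})
		if err == nil && nodeReady(n) {
			fmt.Println("node is Ready")
			return
		}
		time.Sleep(500 * time.Millisecond)
	}
	fmt.Println("timed out waiting for node to become Ready")
}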
	I0314 18:21:20.091004  960722 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:21:20.091076  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:20.091091  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.091100  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.091105  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.098186  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:20.105087  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.105177  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-cx8rc
	I0314 18:21:20.105188  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.105205  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.105215  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.108258  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.108884  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.108896  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.108902  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.108907  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.112261  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.112820  960722 pod_ready.go:92] pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.112837  960722 pod_ready.go:81] duration metric: took 7.728308ms for pod "coredns-5dd5756b68-cx8rc" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.112845  960722 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.112898  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/coredns-5dd5756b68-jsddl
	I0314 18:21:20.112907  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.112913  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.112917  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.119213  960722 round_trippers.go:574] Response Status: 200 OK in 6 milliseconds
	I0314 18:21:20.119815  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.119832  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.119838  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.119843  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.123274  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.123867  960722 pod_ready.go:92] pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.123883  960722 pod_ready.go:81] duration metric: took 11.0316ms for pod "coredns-5dd5756b68-jsddl" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.123891  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.123936  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786
	I0314 18:21:20.123943  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.123950  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.123954  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.126695  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:20.127411  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:20.127427  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.127434  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.127441  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.130847  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.131509  960722 pod_ready.go:92] pod "etcd-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.131530  960722 pod_ready.go:81] duration metric: took 7.63204ms for pod "etcd-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.131541  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.131600  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m02
	I0314 18:21:20.131612  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.131621  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.131629  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.135022  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.135511  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:20.135525  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.135532  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.135535  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.139674  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.142312  960722 pod_ready.go:92] pod "etcd-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:20.142331  960722 pod_ready.go:81] duration metric: took 10.781899ms for pod "etcd-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.142342  960722 pod_ready.go:78] waiting up to 6m0s for pod "etcd-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:20.286690  960722 request.go:629] Waited for 144.264427ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.286773  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.286778  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.286785  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.286789  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.290137  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:20.486312  960722 request.go:629] Waited for 195.438244ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.486381  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.486387  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.486396  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.486402  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.491146  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
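The "Waited for ... due to client-side throttling" messages come from client-go's default rate limiter: with QPS and Burst left at 0 in the rest.Config dump above, the library defaults (5 QPS, burst 10) apply, so the tight polling loop gets queued. A short sketch of raising those limits on a rest.Config; the values here are illustrative, not what minikube uses:

package main

import (
	"fmt"

	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	// Raise the client-side rate limits so bursts of polling requests are not delayed.
	cfg.QPS = 50
	cfg.Burst = 100
	if _, err := kubernetes.NewForConfig(cfg); err != nil {
		panic(err)
	}
	fmt.Println("client built with QPS=50 Burst=100")
}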
	I0314 18:21:20.686288  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:20.686320  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.686332  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.686338  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.690392  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:20.886129  960722 request.go:629] Waited for 195.138224ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.886203  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:20.886207  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:20.886215  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:20.886218  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:20.889915  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:21.143239  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:21.143263  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.143272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.143276  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.147377  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:21.286413  960722 request.go:629] Waited for 138.318793ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.286498  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.286510  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.286520  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.286528  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.290446  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:21.643256  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:21.643281  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.643289  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.643294  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.647657  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:21.686994  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:21.687013  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:21.687022  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:21.687033  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:21.690519  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:22.143217  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:22.143241  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.143249  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.143254  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.147683  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:22.148777  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:22.148795  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.148805  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.148810  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.153712  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:22.154465  960722 pod_ready.go:102] pod "etcd-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:22.642789  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:22.642819  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.642830  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.642845  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.653779  960722 round_trippers.go:574] Response Status: 200 OK in 10 milliseconds
	I0314 18:21:22.654524  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:22.654540  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:22.654549  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:22.654556  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:22.662537  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:23.143313  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:23.143339  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.143350  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.143355  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.147905  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:23.148858  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:23.148878  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.148889  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.148895  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.152464  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:23.642960  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:23.642992  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.643004  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.643009  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.651505  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:23.652300  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:23.652320  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:23.652327  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:23.652331  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:23.656566  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:24.142543  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:24.142567  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.142575  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.142579  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.146469  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.147168  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:24.147183  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.147190  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.147195  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.150467  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.642886  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/etcd-ha-105786-m03
	I0314 18:21:24.642910  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.642918  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.642922  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.646490  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.647195  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:24.647209  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.647217  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.647222  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.650491  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.650957  960722 pod_ready.go:92] pod "etcd-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.650979  960722 pod_ready.go:81] duration metric: took 4.508629759s for pod "etcd-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.650997  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.651052  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786
	I0314 18:21:24.651062  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.651069  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.651072  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.654107  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.655255  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:24.655276  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.655286  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.655293  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.657959  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:24.658539  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.658562  960722 pod_ready.go:81] duration metric: took 7.558089ms for pod "kube-apiserver-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.658574  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.686920  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m02
	I0314 18:21:24.686945  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.686959  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.686965  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.690465  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:24.886526  960722 request.go:629] Waited for 195.399001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:24.886599  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:24.886606  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:24.886618  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:24.886627  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:24.890692  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:24.891235  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:24.891256  960722 pod_ready.go:81] duration metric: took 232.674585ms for pod "kube-apiserver-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:24.891265  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:25.086643  960722 request.go:629] Waited for 195.304841ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.086746  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.086758  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.086770  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.086781  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.091072  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.286900  960722 request.go:629] Waited for 194.900934ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.286995  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.287002  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.287010  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.287017  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.290890  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:25.486216  960722 request.go:629] Waited for 94.675787ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.486324  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.486336  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.486347  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.486351  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.490683  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.686301  960722 request.go:629] Waited for 194.320717ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.686404  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:25.686413  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.686422  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.686430  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.690545  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:25.892206  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:25.892259  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:25.892272  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:25.892278  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:25.895673  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.086940  960722 request.go:629] Waited for 190.282738ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.087018  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.087024  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.087032  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.087036  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.091882  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:26.391690  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:26.391712  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.391720  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.391730  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.395703  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.486175  960722 request.go:629] Waited for 89.650069ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.486243  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.486250  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.486261  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.486271  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.489646  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.892192  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:26.892251  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.892264  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.892290  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.896240  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.897407  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:26.897426  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:26.897436  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:26.897443  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:26.900712  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:26.901357  960722 pod_ready.go:102] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:27.392547  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:27.392578  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.392591  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.392598  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.396799  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:27.397812  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:27.397841  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.397852  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.397858  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.401108  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:27.891460  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:27.891484  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.891491  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.891494  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.895037  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:27.895718  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:27.895740  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:27.895751  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:27.895757  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:27.899227  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:28.391887  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:28.391916  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.391928  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.391936  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.399750  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:28.401603  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:28.401629  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.401640  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.401648  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.405076  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:28.891943  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:28.891977  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.891989  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.891997  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.896023  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:28.897007  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:28.897029  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:28.897036  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:28.897039  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:28.900075  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.392371  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:29.392396  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.392407  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.392413  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.396392  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.397535  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:29.397552  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.397559  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.397564  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.400717  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.401356  960722 pod_ready.go:102] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"False"
	I0314 18:21:29.891508  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-apiserver-ha-105786-m03
	I0314 18:21:29.891533  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.891541  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.891546  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.895729  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:29.897024  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:29.897046  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.897057  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.897064  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.900241  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.900998  960722 pod_ready.go:92] pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:29.901022  960722 pod_ready.go:81] duration metric: took 5.009747113s for pod "kube-apiserver-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.901035  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.901103  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786
	I0314 18:21:29.901114  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.901124  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.901129  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.904264  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:29.905115  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:29.905132  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.905143  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.905148  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.907940  960722 round_trippers.go:574] Response Status: 200 OK in 2 milliseconds
	I0314 18:21:29.908513  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:29.908530  960722 pod_ready.go:81] duration metric: took 7.488189ms for pod "kube-controller-manager-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.908539  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:29.908599  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m02
	I0314 18:21:29.908607  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:29.908614  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:29.908622  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:29.911967  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.086896  960722 request.go:629] Waited for 174.32646ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:30.086970  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:30.086981  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.086993  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.086999  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.091325  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.091816  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.091840  960722 pod_ready.go:81] duration metric: took 183.294651ms for pod "kube-controller-manager-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.091850  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.286334  960722 request.go:629] Waited for 194.37948ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m03
	I0314 18:21:30.286420  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ha-105786-m03
	I0314 18:21:30.286426  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.286434  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.286446  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.290575  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.486791  960722 request.go:629] Waited for 195.30278ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.486864  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.486875  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.486886  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.486894  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.490449  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.490992  960722 pod_ready.go:92] pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.491014  960722 pod_ready.go:81] duration metric: took 399.156594ms for pod "kube-controller-manager-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.491025  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-6rjsv" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.686013  960722 request.go:629] Waited for 194.876678ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rjsv
	I0314 18:21:30.686073  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-6rjsv
	I0314 18:21:30.686078  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.686085  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.686089  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.690974  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:30.886508  960722 request.go:629] Waited for 194.08458ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.886569  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:30.886574  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:30.886581  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:30.886585  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:30.890076  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:30.890748  960722 pod_ready.go:92] pod "kube-proxy-6rjsv" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:30.890766  960722 pod_ready.go:81] duration metric: took 399.734743ms for pod "kube-proxy-6rjsv" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:30.890776  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.086344  960722 request.go:629] Waited for 195.462369ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:21:31.086404  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-hd8mx
	I0314 18:21:31.086410  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.086420  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.086426  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.090034  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.286093  960722 request.go:629] Waited for 195.283418ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:31.286192  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:31.286203  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.286211  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.286215  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.289646  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.290391  960722 pod_ready.go:92] pod "kube-proxy-hd8mx" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:31.290411  960722 pod_ready.go:81] duration metric: took 399.629073ms for pod "kube-proxy-hd8mx" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.290422  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.486491  960722 request.go:629] Waited for 195.976001ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:21:31.486547  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-proxy-qpz89
	I0314 18:21:31.486552  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.486560  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.486565  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.490567  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:31.686972  960722 request.go:629] Waited for 195.411714ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:31.687050  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:31.687061  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.687073  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.687106  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.691407  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:31.692074  960722 pod_ready.go:92] pod "kube-proxy-qpz89" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:31.692095  960722 pod_ready.go:81] duration metric: took 401.664857ms for pod "kube-proxy-qpz89" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.692104  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:31.886657  960722 request.go:629] Waited for 194.461564ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:21:31.886737  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786
	I0314 18:21:31.886749  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:31.886782  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:31.886804  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:31.890792  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.086783  960722 request.go:629] Waited for 195.361491ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:32.086854  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786
	I0314 18:21:32.086860  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.086870  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.086876  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.090475  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.091089  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.091111  960722 pod_ready.go:81] duration metric: took 398.998864ms for pod "kube-scheduler-ha-105786" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.091124  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.286267  960722 request.go:629] Waited for 195.058609ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:21:32.286353  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m02
	I0314 18:21:32.286366  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.286373  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.286377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.291224  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:32.486368  960722 request.go:629] Waited for 193.418218ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:32.486426  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m02
	I0314 18:21:32.486447  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.486468  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.486489  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.491230  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:32.491931  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.491953  960722 pod_ready.go:81] duration metric: took 400.81441ms for pod "kube-scheduler-ha-105786-m02" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.491966  960722 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.685999  960722 request.go:629] Waited for 193.931338ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m03
	I0314 18:21:32.686072  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods/kube-scheduler-ha-105786-m03
	I0314 18:21:32.686081  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.686095  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.686104  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.693447  960722 round_trippers.go:574] Response Status: 200 OK in 7 milliseconds
	I0314 18:21:32.886470  960722 request.go:629] Waited for 192.192425ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:32.886582  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes/ha-105786-m03
	I0314 18:21:32.886595  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.886605  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.886613  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.890459  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:32.891038  960722 pod_ready.go:92] pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace has status "Ready":"True"
	I0314 18:21:32.891063  960722 pod_ready.go:81] duration metric: took 399.089136ms for pod "kube-scheduler-ha-105786-m03" in "kube-system" namespace to be "Ready" ...
	I0314 18:21:32.891078  960722 pod_ready.go:38] duration metric: took 12.800064442s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 18:21:32.891095  960722 api_server.go:52] waiting for apiserver process to appear ...
	I0314 18:21:32.891160  960722 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:21:32.910084  960722 api_server.go:72] duration metric: took 17.690361984s to wait for apiserver process to appear ...
	I0314 18:21:32.910107  960722 api_server.go:88] waiting for apiserver healthz status ...
	I0314 18:21:32.910130  960722 api_server.go:253] Checking apiserver healthz at https://192.168.39.170:8443/healthz ...
	I0314 18:21:32.919423  960722 api_server.go:279] https://192.168.39.170:8443/healthz returned 200:
	ok
	I0314 18:21:32.919494  960722 round_trippers.go:463] GET https://192.168.39.170:8443/version
	I0314 18:21:32.919499  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:32.919507  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:32.919511  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:32.920660  960722 round_trippers.go:574] Response Status: 200 OK in 1 milliseconds
	I0314 18:21:32.920872  960722 api_server.go:141] control plane version: v1.28.4
	I0314 18:21:32.920895  960722 api_server.go:131] duration metric: took 10.779673ms to wait for apiserver health ...
	I0314 18:21:32.920906  960722 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 18:21:33.086608  960722 request.go:629] Waited for 165.598091ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.086674  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.086681  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.086698  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.086710  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.094732  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:33.104548  960722 system_pods.go:59] 24 kube-system pods found
	I0314 18:21:33.104578  960722 system_pods.go:61] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:21:33.104583  960722 system_pods.go:61] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:21:33.104587  960722 system_pods.go:61] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:21:33.104590  960722 system_pods.go:61] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:21:33.104593  960722 system_pods.go:61] "etcd-ha-105786-m03" [aaa4af53-3ee2-484a-8067-80dffaefd8ea] Running
	I0314 18:21:33.104596  960722 system_pods.go:61] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:21:33.104599  960722 system_pods.go:61] "kindnet-gmvl5" [a64e3967-f28a-4fdb-a5ee-da05c6aba46a] Running
	I0314 18:21:33.104602  960722 system_pods.go:61] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:21:33.104604  960722 system_pods.go:61] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:21:33.104607  960722 system_pods.go:61] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Running
	I0314 18:21:33.104610  960722 system_pods.go:61] "kube-apiserver-ha-105786-m03" [1a142787-c591-472a-8e85-dbe976383bff] Running
	I0314 18:21:33.104614  960722 system_pods.go:61] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:21:33.104620  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:21:33.104623  960722 system_pods.go:61] "kube-controller-manager-ha-105786-m03" [332be1f1-48d4-425a-beda-44fe074bac93] Running
	I0314 18:21:33.104626  960722 system_pods.go:61] "kube-proxy-6rjsv" [6e2b5963-5c97-4f70-999a-f01ad58822fc] Running
	I0314 18:21:33.104629  960722 system_pods.go:61] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:21:33.104632  960722 system_pods.go:61] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:21:33.104635  960722 system_pods.go:61] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:21:33.104638  960722 system_pods.go:61] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:21:33.104644  960722 system_pods.go:61] "kube-scheduler-ha-105786-m03" [2bc9adb8-e64d-4b60-a392-5fa73d89b365] Running
	I0314 18:21:33.104651  960722 system_pods.go:61] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104658  960722 system_pods.go:61] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104667  960722 system_pods.go:61] "kube-vip-ha-105786-m03" [272b8465-c012-4e94-8142-2db4d20fd844] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.104674  960722 system_pods.go:61] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:21:33.104681  960722 system_pods.go:74] duration metric: took 183.768694ms to wait for pod list to return data ...
	I0314 18:21:33.104691  960722 default_sa.go:34] waiting for default service account to be created ...
	I0314 18:21:33.286741  960722 request.go:629] Waited for 181.965323ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:21:33.286798  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/default/serviceaccounts
	I0314 18:21:33.286803  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.286811  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.286816  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.291294  960722 round_trippers.go:574] Response Status: 200 OK in 4 milliseconds
	I0314 18:21:33.291440  960722 default_sa.go:45] found service account: "default"
	I0314 18:21:33.291455  960722 default_sa.go:55] duration metric: took 186.757524ms for default service account to be created ...
	I0314 18:21:33.291465  960722 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 18:21:33.485973  960722 request.go:629] Waited for 194.419337ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.486045  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/namespaces/kube-system/pods
	I0314 18:21:33.486050  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.486061  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.486066  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.494649  960722 round_trippers.go:574] Response Status: 200 OK in 8 milliseconds
	I0314 18:21:33.502794  960722 system_pods.go:86] 24 kube-system pods found
	I0314 18:21:33.502820  960722 system_pods.go:89] "coredns-5dd5756b68-cx8rc" [d2e960de-67a9-4385-ba02-78a744602bcc] Running
	I0314 18:21:33.502826  960722 system_pods.go:89] "coredns-5dd5756b68-jsddl" [bdbdea16-97b0-4581-8bab-9a472af11004] Running
	I0314 18:21:33.502831  960722 system_pods.go:89] "etcd-ha-105786" [a11d4142-2853-45c8-9433-c4a5d54fc66c] Running
	I0314 18:21:33.502834  960722 system_pods.go:89] "etcd-ha-105786-m02" [5bce83df-04d4-476b-8869-746c5900563e] Running
	I0314 18:21:33.502839  960722 system_pods.go:89] "etcd-ha-105786-m03" [aaa4af53-3ee2-484a-8067-80dffaefd8ea] Running
	I0314 18:21:33.502845  960722 system_pods.go:89] "kindnet-9b2pr" [e23e9c49-0b7d-46ca-ae62-11e9b26a1280] Running
	I0314 18:21:33.502851  960722 system_pods.go:89] "kindnet-gmvl5" [a64e3967-f28a-4fdb-a5ee-da05c6aba46a] Running
	I0314 18:21:33.502857  960722 system_pods.go:89] "kindnet-vpgvl" [fcd2b2f2-848f-408e-8d28-fb54cf623210] Running
	I0314 18:21:33.502867  960722 system_pods.go:89] "kube-apiserver-ha-105786" [f8058168-3736-4279-9fa7-f4878d8361e1] Running
	I0314 18:21:33.502875  960722 system_pods.go:89] "kube-apiserver-ha-105786-m02" [5f8798ed-58cc-45c8-a83c-217d99d40769] Running
	I0314 18:21:33.502884  960722 system_pods.go:89] "kube-apiserver-ha-105786-m03" [1a142787-c591-472a-8e85-dbe976383bff] Running
	I0314 18:21:33.502893  960722 system_pods.go:89] "kube-controller-manager-ha-105786" [0a5ec36c-be15-4649-a8f8-1dd9c5a0b87b] Running
	I0314 18:21:33.502900  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m02" [d6fe9aea-613c-4bd8-8cb4-e3ac732feaa9] Running
	I0314 18:21:33.502905  960722 system_pods.go:89] "kube-controller-manager-ha-105786-m03" [332be1f1-48d4-425a-beda-44fe074bac93] Running
	I0314 18:21:33.502911  960722 system_pods.go:89] "kube-proxy-6rjsv" [6e2b5963-5c97-4f70-999a-f01ad58822fc] Running
	I0314 18:21:33.502916  960722 system_pods.go:89] "kube-proxy-hd8mx" [3e003f67-93dd-4105-a7bd-68d9af563ea4] Running
	I0314 18:21:33.502924  960722 system_pods.go:89] "kube-proxy-qpz89" [ca6a156c-9589-4200-bcfe-1537251ac9e2] Running
	I0314 18:21:33.502931  960722 system_pods.go:89] "kube-scheduler-ha-105786" [032b8718-b475-4155-b83b-1d065123f53f] Running
	I0314 18:21:33.502937  960722 system_pods.go:89] "kube-scheduler-ha-105786-m02" [f397b74c-e61a-498b-807b-002474ce63b2] Running
	I0314 18:21:33.502947  960722 system_pods.go:89] "kube-scheduler-ha-105786-m03" [2bc9adb8-e64d-4b60-a392-5fa73d89b365] Running
	I0314 18:21:33.502961  960722 system_pods.go:89] "kube-vip-ha-105786" [b310c7b3-e9d7-4f98-8df8-fdfb9f7754f8] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502975  960722 system_pods.go:89] "kube-vip-ha-105786-m02" [5caee046-92ea-4315-b240-84ce4553d64e] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502990  960722 system_pods.go:89] "kube-vip-ha-105786-m03" [272b8465-c012-4e94-8142-2db4d20fd844] Running / Ready:ContainersNotReady (containers with unready status: [kube-vip]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-vip])
	I0314 18:21:33.502998  960722 system_pods.go:89] "storage-provisioner" [566fc43f-5610-4dcd-b683-1cc87e6ed609] Running
	I0314 18:21:33.503006  960722 system_pods.go:126] duration metric: took 211.531286ms to wait for k8s-apps to be running ...
	I0314 18:21:33.503016  960722 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 18:21:33.503076  960722 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:21:33.521843  960722 system_svc.go:56] duration metric: took 18.815526ms WaitForService to wait for kubelet
	I0314 18:21:33.521874  960722 kubeadm.go:576] duration metric: took 18.302157253s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:21:33.521894  960722 node_conditions.go:102] verifying NodePressure condition ...
	I0314 18:21:33.686250  960722 request.go:629] Waited for 164.272977ms due to client-side throttling, not priority and fairness, request: GET:https://192.168.39.170:8443/api/v1/nodes
	I0314 18:21:33.686344  960722 round_trippers.go:463] GET https://192.168.39.170:8443/api/v1/nodes
	I0314 18:21:33.686353  960722 round_trippers.go:469] Request Headers:
	I0314 18:21:33.686366  960722 round_trippers.go:473]     Accept: application/json, */*
	I0314 18:21:33.686377  960722 round_trippers.go:473]     User-Agent: minikube-linux-amd64/v0.0.0 (linux/amd64) kubernetes/$Format
	I0314 18:21:33.690388  960722 round_trippers.go:574] Response Status: 200 OK in 3 milliseconds
	I0314 18:21:33.691354  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691373  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691385  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691388  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691392  960722 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 18:21:33.691394  960722 node_conditions.go:123] node cpu capacity is 2
	I0314 18:21:33.691398  960722 node_conditions.go:105] duration metric: took 169.499624ms to run NodePressure ...
	I0314 18:21:33.691410  960722 start.go:240] waiting for startup goroutines ...
	I0314 18:21:33.691429  960722 start.go:254] writing updated cluster config ...
	I0314 18:21:33.691715  960722 ssh_runner.go:195] Run: rm -f paused
	I0314 18:21:33.746934  960722 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 18:21:33.749619  960722 out.go:177] * Done! kubectl is now configured to use "ha-105786" cluster and "default" namespace by default
	
	
	==> CRI-O <==
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.163832172Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440769163573485,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=849da532-7f6c-4c0f-91b4-757a202c56a3 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.165476098Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8e7e5163-e65a-4e2c-a621-74fc89db17bf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.165535231Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8e7e5163-e65a-4e2c-a621-74fc89db17bf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.166027402Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8e7e5163-e65a-4e2c-a621-74fc89db17bf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.224127937Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=011a1bf3-dc7d-4409-bcad-ff9d84eb1565 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.224429529Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=011a1bf3-dc7d-4409-bcad-ff9d84eb1565 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.227615154Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=3ccac6fb-063b-4997-86a0-07a263ba9f0f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.228407324Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440769228373800,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=3ccac6fb-063b-4997-86a0-07a263ba9f0f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.229178031Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1dfd64fe-8136-4311-9485-6e3b6e124adf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.229230408Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1dfd64fe-8136-4311-9485-6e3b6e124adf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.229500805Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=1dfd64fe-8136-4311-9485-6e3b6e124adf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.279605682Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=679e9d03-85e4-4cba-aa8a-ee0f0b5848fd name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.279789173Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=679e9d03-85e4-4cba-aa8a-ee0f0b5848fd name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.281872631Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2011c988-28c2-45e5-a4fa-81374f8806d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.282498491Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440769282474697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2011c988-28c2-45e5-a4fa-81374f8806d8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.283137708Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=d1eedab4-488b-4154-8ed5-3756a0e9d67a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.283190000Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=d1eedab4-488b-4154-8ed5-3756a0e9d67a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.283431701Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=d1eedab4-488b-4154-8ed5-3756a0e9d67a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.326137835Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=939f3360-5cac-4843-acd9-a4d1f61025a6 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.326208134Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=939f3360-5cac-4843-acd9-a4d1f61025a6 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.328409589Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9aa010f8-0c81-4a66-8a32-f581230d9f3c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.329154895Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710440769329128474,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9aa010f8-0c81-4a66-8a32-f581230d9f3c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.330200261Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=77b914c8-0165-4258-9513-35e525638692 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.330258751Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=77b914c8-0165-4258-9513-35e525638692 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:26:09 ha-105786 crio[676]: time="2024-03-14 18:26:09.330571577Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675,PodSandboxId:4ae35dda855682eeec7f084a8d99d1010b7179f7087ba308f7d247c1227273f8,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:7,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440686319282766,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 7,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710440496387656125,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/t
ermination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346673813878,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containe
rPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440346688507665,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,}
,Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9012775f0e5a49216f3113a2768c6e706c86414b5a38c260ff1116270240082d,PodSandboxId:a9eac200df7fd34b55b63f683744daafbe18d969cc4358690346eadfe9ab91a4,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710440345547366091,Labels:
map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d,PodSandboxId:eebe65b95c46c73a769c55c952fec8c3f055e0dddb1f978ebccf161efd718342,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710440344039
031842,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440342438467070,Labels:map[string]string{io.
kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440320770654871,Labels:map[string]string{io.kubernetes.container.name: ku
be-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183,PodSandboxId:6cdd0dfbe22b09182df63156e8bf9f125eb3496d1621ccafae99e452d62e68dd,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710440320670830153,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kuber
netes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922,PodSandboxId:70a5a4aad21293218245a0bdcfcf2db4af26ed9ea1f2a700cdfe59582cd21157,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710440320597037665,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernet
es.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440320556245114,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.k
ubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=77b914c8-0165-4258-9513-35e525638692 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	ff5ec432d711f       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      About a minute ago   Exited              kube-vip                  7                   4ae35dda85568       kube-vip-ha-105786
	522fa7bdb84ee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   4 minutes ago        Running             busybox                   0                   c09b6e29d418a       busybox-5b5d89c9d6-4h99c
	b538852248364       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Running             coredns                   0                   c6041a600821e       coredns-5dd5756b68-cx8rc
	4fbdd8b34ac46       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Running             coredns                   0                   880e93f2a3ed5       coredns-5dd5756b68-jsddl
	9012775f0e5a4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Running             storage-provisioner       0                   a9eac200df7fd       storage-provisioner
	fa5c51367cb91       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Running             kindnet-cni               0                   eebe65b95c46c       kindnet-9b2pr
	50a3dcdc83e53       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Running             kube-proxy                0                   6d43c44b3e99b       kube-proxy-hd8mx
	3f27ba9bd31a4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Running             kube-scheduler            0                   ed2bf5bc80b8e       kube-scheduler-ha-105786
	dd5f374c12463       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Running             kube-apiserver            0                   6cdd0dfbe22b0       kube-apiserver-ha-105786
	ee804d488d0b1       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Running             kube-controller-manager   0                   70a5a4aad2129       kube-controller-manager-ha-105786
	ff7528019bad0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Running             etcd                      0                   c3fe1175987df       etcd-ha-105786
	
	
	==> coredns [4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817] <==
	[INFO] 10.244.1.2:41895 - 4 "A IN kubernetes.io. udp 31 false 512" NOERROR qr,aa,rd,ra 60 0.000096424s
	[INFO] 10.244.0.4:59440 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000256482s
	[INFO] 10.244.0.4:45605 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000227942s
	[INFO] 10.244.0.4:50087 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000117895s
	[INFO] 10.244.2.2:47342 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000135824s
	[INFO] 10.244.2.2:51729 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001908531s
	[INFO] 10.244.2.2:45347 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000168024s
	[INFO] 10.244.2.2:47143 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001299211s
	[INFO] 10.244.2.2:60120 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000074392s
	[INFO] 10.244.2.2:54628 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000165918s
	[INFO] 10.244.1.2:41886 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000144702s
	[INFO] 10.244.1.2:39387 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001194423s
	[INFO] 10.244.1.2:54465 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.00023379s
	[INFO] 10.244.1.2:58623 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000124815s
	[INFO] 10.244.0.4:59741 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000067152s
	[INFO] 10.244.0.4:39798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049996s
	[INFO] 10.244.0.4:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088514s
	[INFO] 10.244.2.2:53227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129249s
	[INFO] 10.244.1.2:38289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162454s
	[INFO] 10.244.1.2:39880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157216s
	[INFO] 10.244.0.4:40457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166755s
	[INFO] 10.244.0.4:47654 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165231s
	[INFO] 10.244.2.2:56922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021872s
	[INFO] 10.244.2.2:55729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082709s
	[INFO] 10.244.2.2:40076 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091316s
	
	
	==> coredns [b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b] <==
	[INFO] 10.244.1.2:32813 - 5 "PTR IN 148.40.75.147.in-addr.arpa. udp 44 false 512" NXDOMAIN qr,rd,ra 44 0.002143596s
	[INFO] 10.244.0.4:47271 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000163596s
	[INFO] 10.244.0.4:48154 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003476861s
	[INFO] 10.244.0.4:49667 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000158457s
	[INFO] 10.244.0.4:33929 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.003764421s
	[INFO] 10.244.0.4:52979 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000155659s
	[INFO] 10.244.2.2:39342 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000199098s
	[INFO] 10.244.2.2:41642 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000230102s
	[INFO] 10.244.1.2:54390 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000186353s
	[INFO] 10.244.1.2:53664 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001692621s
	[INFO] 10.244.1.2:43695 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000092508s
	[INFO] 10.244.1.2:46229 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129166s
	[INFO] 10.244.0.4:59002 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000104268s
	[INFO] 10.244.2.2:40041 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000190606s
	[INFO] 10.244.2.2:40444 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000113686s
	[INFO] 10.244.2.2:35209 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219373s
	[INFO] 10.244.1.2:37537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135802s
	[INFO] 10.244.1.2:50389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105335s
	[INFO] 10.244.0.4:53486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200332s
	[INFO] 10.244.0.4:53550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000308188s
	[INFO] 10.244.2.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134191s
	[INFO] 10.244.1.2:43514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127501s
	[INFO] 10.244.1.2:54638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089684s
	[INFO] 10.244.1.2:43811 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187377s
	[INFO] 10.244.1.2:38538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164864s
	
	
	==> describe nodes <==
	Name:               ha-105786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:26:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:21:54 +0000   Thu, 14 Mar 2024 18:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-105786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 83805f81be844e0c8f423f0d34e721b6
	  System UUID:                83805f81-be84-4e0c-8f42-3f0d34e721b6
	  Boot ID:                    592e9c66-43d6-494c-b6d9-c848f3c684fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4h99c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 coredns-5dd5756b68-cx8rc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 coredns-5dd5756b68-jsddl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m9s
	  kube-system                 etcd-ha-105786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m21s
	  kube-system                 kindnet-9b2pr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m9s
	  kube-system                 kube-apiserver-ha-105786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-controller-manager-ha-105786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 kube-proxy-hd8mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m9s
	  kube-system                 kube-scheduler-ha-105786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m19s
	  kube-system                 kube-vip-ha-105786                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m21s
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m8s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 7m6s                   kube-proxy       
	  Normal  Starting                 7m30s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  7m30s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientPID     7m29s (x7 over 7m30s)  kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal  NodeHasSufficientMemory  7m29s (x8 over 7m30s)  kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m29s (x8 over 7m30s)  kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 7m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  7m19s                  kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m19s                  kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m19s                  kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  7m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           7m9s                   node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal  NodeReady                7m4s                   kubelet          Node ha-105786 status is now: NodeReady
	  Normal  RegisteredNode           5m51s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal  RegisteredNode           4m40s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	
	
	Name:               ha-105786-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:53 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:22:40 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 18:21:59 +0000   Thu, 14 Mar 2024 18:23:24 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-105786-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19ca741ee10483194e2397e40db9727
	  System UUID:                d19ca741-ee10-4831-94e2-397e40db9727
	  Boot ID:                    5733374b-8d82-4a03-be20-977a16629e81
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-k6gxp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-105786-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         6m8s
	  kube-system                 kindnet-vpgvl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m16s
	  kube-system                 kube-apiserver-ha-105786-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         5m54s
	  kube-system                 kube-controller-manager-ha-105786-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         6m
	  kube-system                 kube-proxy-qpz89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m16s
	  kube-system                 kube-scheduler-ha-105786-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         6m3s
	  kube-system                 kube-vip-ha-105786-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         5m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        6m5s   kube-proxy       
	  Normal  RegisteredNode  5m51s  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  NodeNotReady    2m45s  node-controller  Node ha-105786-m02 status is now: NodeNotReady
	
	
	Name:               ha-105786-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_21_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:21:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:26:09 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:21:41 +0000   Thu, 14 Mar 2024 18:21:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-105786-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 52afc1e0396540bd95588eb0b4583ac2
	  System UUID:                52afc1e0-3965-40bd-9558-8eb0b4583ac2
	  Boot ID:                    49a72d33-484d-4235-87f0-aa586a313300
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-g4zv5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m35s
	  kube-system                 etcd-ha-105786-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         4m58s
	  kube-system                 kindnet-gmvl5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      4m59s
	  kube-system                 kube-apiserver-ha-105786-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         4m58s
	  kube-system                 kube-controller-manager-ha-105786-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         4m51s
	  kube-system                 kube-proxy-6rjsv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m59s
	  kube-system                 kube-scheduler-ha-105786-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         4m56s
	  kube-system                 kube-vip-ha-105786-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         4m56s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason          Age    From             Message
	  ----    ------          ----   ----             -------
	  Normal  Starting        4m56s  kube-proxy       
	  Normal  RegisteredNode  4m56s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal  RegisteredNode  4m54s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal  RegisteredNode  4m40s  node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	
	
	Name:               ha-105786-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:26:04 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:24:24 +0000   Thu, 14 Mar 2024 18:24:24 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-105786-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e09570d6bc045a59dfec434fd490a91
	  System UUID:                7e09570d-6bc0-45a5-9dfe-c434fd490a91
	  Boot ID:                    428a1228-6299-4b38-9548-9e1cc3a10d5e
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzjdr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      3m57s
	  kube-system                 kube-proxy-bftws    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m57s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                   From             Message
	  ----    ------                   ----                  ----             -------
	  Normal  Starting                 3m53s                 kube-proxy       
	  Normal  RegisteredNode           3m56s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  RegisteredNode           3m55s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  RegisteredNode           3m54s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal  NodeNotReady             2m55s                 node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal  NodeHasSufficientMemory  105s (x6 over 3m59s)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    105s (x6 over 3m59s)  kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     105s (x6 over 3m59s)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Normal  NodeReady                105s (x2 over 3m50s)  kubelet          Node ha-105786-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[Mar14 18:18] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.051627] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042750] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.571119] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.520173] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +4.684873] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000013] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.323219] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062088] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057119] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.191786] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127293] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261879] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.345908] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065032] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.795309] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.848496] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.157868] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.914153] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[Mar14 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.437406] kauditd_printk_skb: 73 callbacks suppressed
	
	
	==> etcd [ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc] <==
	{"level":"warn","ts":"2024-03-14T18:26:09.497537Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.594989Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.633894Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.642277Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.647817Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.665556Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.674007Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.681417Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.687546Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.691265Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.695067Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.70533Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.711331Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.717487Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.721454Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.725521Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.733801Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.746432Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.756424Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.760022Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.764591Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.770881Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.777902Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.786198Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	{"level":"warn","ts":"2024-03-14T18:26:09.795786Z","caller":"rafthttp/peer.go:267","msg":"dropped internal Raft message since sending buffer is full (overloaded network)","message-type":"MsgHeartbeat","local-member-id":"6b385368e7357343","from":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4","remote-peer-name":"pipeline","remote-peer-active":false}
	
	
	==> kernel <==
	 18:26:09 up 8 min,  0 users,  load average: 0.29, 0.24, 0.13
	Linux ha-105786 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d] <==
	I0314 18:25:33.650337       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:25:43.667448       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:25:43.667508       1 main.go:227] handling current node
	I0314 18:25:43.667524       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:25:43.667534       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:25:43.667852       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:25:43.667896       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:25:43.667995       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:25:43.668035       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:25:53.683891       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:25:53.683988       1 main.go:227] handling current node
	I0314 18:25:53.684066       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:25:53.684104       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:25:53.684274       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:25:53.684297       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:25:53.684361       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:25:53.684379       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:26:03.692472       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:26:03.692517       1 main.go:227] handling current node
	I0314 18:26:03.692528       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:26:03.692540       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:26:03.692668       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:26:03.692762       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:26:03.692865       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:26:03.692895       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183] <==
	Trace[752158558]:  ---"Txn call succeeded" 6345ms (18:20:03.184)]
	Trace[752158558]: [6.347024177s] [6.347024177s] END
	I0314 18:20:03.186045       1 trace.go:236] Trace[1885140667]: "Update" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:fbc94115-6d0b-414d-ad84-c5eeaf26d5ea,client:192.168.39.254,protocol:HTTP/2.0,resource:leases,scope:resource,url:/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-105786,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:PUT (14-Mar-2024 18:20:01.502) (total time: 1683ms):
	Trace[1885140667]: ["GuaranteedUpdate etcd3" audit-id:fbc94115-6d0b-414d-ad84-c5eeaf26d5ea,key:/leases/kube-node-lease/ha-105786,type:*coordination.Lease,resource:leases.coordination.k8s.io 1683ms (18:20:01.502)
	Trace[1885140667]:  ---"Txn call completed" 1680ms (18:20:03.185)]
	Trace[1885140667]: [1.683699125s] [1.683699125s] END
	E0314 18:20:03.190572       1 controller.go:193] "Failed to update lease" err="Operation cannot be fulfilled on leases.coordination.k8s.io \"apiserver-jihrvibd4oxtendqa56r4cz4ky\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:20:03.249348       1 trace.go:236] Trace[68449238]: "Create" accept:application/vnd.kubernetes.protobuf,application/json,audit-id:52a71337-9df0-4a51-bb09-73302b2702d3,client:192.168.39.245,protocol:HTTP/2.0,resource:pods,scope:resource,url:/api/v1/namespaces/kube-system/pods,user-agent:kubelet/v1.28.4 (linux/amd64) kubernetes/bae2c62,verb:POST (14-Mar-2024 18:20:01.641) (total time: 1608ms):
	Trace[68449238]: [1.608286428s] [1.608286428s] END
	E0314 18:20:33.381836       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:20:33.382313       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:20:33.383116       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:20:33.384105       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:20:33.384243       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 1.790229ms, panicked: false, err: context canceled, panic-reason: <nil>
	E0314 18:20:33.384279       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:20:33.384943       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:20:33.385436       1 timeout.go:142] post-timeout activity - time-elapsed: 3.035341ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ha-105786" result: <nil>
	E0314 18:20:33.385553       1 timeout.go:142] post-timeout activity - time-elapsed: 4.896713ms, GET "/api/v1/nodes/ha-105786" result: <nil>
	E0314 18:21:38.394893       1 upgradeaware.go:425] Error proxying data from client to backend: write tcp 192.168.39.170:52880->192.168.39.245:10250: write: broken pipe
	E0314 18:22:05.011203       1 finisher.go:175] FinishRequest: post-timeout activity - time-elapsed: 4.988µs, panicked: false, err: context canceled, panic-reason: <nil>
	E0314 18:22:05.011992       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	E0314 18:22:05.013237       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	E0314 18:22:05.013320       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:22:05.014570       1 timeout.go:142] post-timeout activity - time-elapsed: 2.578598ms, PUT "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	W0314 18:22:55.888230       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.190]
	
	
	==> kube-controller-manager [ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922] <==
	I0314 18:21:35.048242       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-l2jg8"
	I0314 18:21:35.256340       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="364.425814ms"
	I0314 18:21:35.314007       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="57.587399ms"
	I0314 18:21:35.314328       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="142.141µs"
	I0314 18:21:36.549022       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.837461ms"
	I0314 18:21:36.550338       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="52.453µs"
	I0314 18:21:37.261244       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.412621ms"
	I0314 18:21:37.261342       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.654µs"
	I0314 18:21:37.412161       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="36.802386ms"
	I0314 18:21:37.412290       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.324µs"
	E0314 18:22:10.691761       1 certificate_controller.go:146] Sync csr-8njl4 failed with : error updating signature for csr: Operation cannot be fulfilled on certificatesigningrequests.certificates.k8s.io "csr-8njl4": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:22:12.151077       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"ha-105786-m04\" does not exist"
	I0314 18:22:12.202408       1 range_allocator.go:380] "Set node PodCIDR" node="ha-105786-m04" podCIDRs=["10.244.3.0/24"]
	I0314 18:22:12.208056       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-fzjdr"
	I0314 18:22:12.208230       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-sx7t4"
	I0314 18:22:12.451342       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-jkwnt"
	I0314 18:22:12.466914       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kindnet-xpmtb"
	I0314 18:22:12.478959       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-sx7t4"
	I0314 18:22:12.501278       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: kube-proxy-jsmlk"
	I0314 18:22:15.331369       1 event.go:307] "Event occurred" object="ha-105786-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller"
	I0314 18:22:15.349015       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="ha-105786-m04"
	I0314 18:22:19.420413       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	I0314 18:23:24.830507       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="22.57665ms"
	I0314 18:23:24.830649       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="55.522µs"
	I0314 18:24:24.114447       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	
	
	==> kube-proxy [50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9] <==
	I0314 18:19:02.648031       1 server_others.go:69] "Using iptables proxy"
	I0314 18:19:02.664250       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:19:02.714117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:02.714162       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:02.716766       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:02.717745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:02.718063       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:02.718100       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:02.720154       1 config.go:188] "Starting service config controller"
	I0314 18:19:02.720408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:02.720465       1 config.go:315] "Starting node config controller"
	I0314 18:19:02.720491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:02.721511       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:02.721558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:02.820597       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:02.820664       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:02.822840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2] <==
	E0314 18:21:34.741225       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-k6gxp\": pod busybox-5b5d89c9d6-k6gxp is already assigned to node \"ha-105786-m02\"" pod="default/busybox-5b5d89c9d6-k6gxp"
	I0314 18:21:34.741262       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-k6gxp" node="ha-105786-m02"
	E0314 18:21:34.797062       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-4h99c\": pod busybox-5b5d89c9d6-4h99c is already assigned to node \"ha-105786\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-4h99c" node="ha-105786"
	E0314 18:21:34.797612       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-4h99c\": pod busybox-5b5d89c9d6-4h99c is already assigned to node \"ha-105786\"" pod="default/busybox-5b5d89c9d6-4h99c"
	I0314 18:21:34.797846       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-4h99c" node="ha-105786"
	E0314 18:21:34.797954       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-g4zv5\": pod busybox-5b5d89c9d6-g4zv5 is already assigned to node \"ha-105786-m03\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-g4zv5" node="ha-105786-m03"
	E0314 18:21:34.798050       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 5d3f92bb-be7f-4f0b-9815-0fa785ea455b(default/busybox-5b5d89c9d6-g4zv5) wasn't assumed so cannot be forgotten"
	E0314 18:21:34.798200       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-g4zv5\": pod busybox-5b5d89c9d6-g4zv5 is already assigned to node \"ha-105786-m03\"" pod="default/busybox-5b5d89c9d6-g4zv5"
	I0314 18:21:34.798284       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-g4zv5" node="ha-105786-m03"
	E0314 18:22:12.237093       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-sx7t4\": pod kube-proxy-sx7t4 is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-sx7t4" node="ha-105786-m04"
	E0314 18:22:12.237517       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod cb4e3e62-340a-4367-bc4a-d72b68f1082a(kube-system/kube-proxy-sx7t4) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.237620       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-sx7t4\": pod kube-proxy-sx7t4 is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-sx7t4"
	I0314 18:22:12.237762       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-sx7t4" node="ha-105786-m04"
	E0314 18:22:12.368112       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kindnet-xpmtb\": pod kindnet-xpmtb is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kindnet-xpmtb" node="ha-105786-m04"
	E0314 18:22:12.369085       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod e6de9f42-83ee-4db1-b0bb-152b8104c199(kube-system/kindnet-xpmtb) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.371466       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kindnet-xpmtb\": pod kindnet-xpmtb is already assigned to node \"ha-105786-m04\"" pod="kube-system/kindnet-xpmtb"
	I0314 18:22:12.371648       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kindnet-xpmtb" node="ha-105786-m04"
	E0314 18:22:12.368963       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-bftws\": pod kube-proxy-bftws is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-bftws" node="ha-105786-m04"
	E0314 18:22:12.372902       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 4dfc4fa6-ad4c-4ac7-8330-98bb674b95bc(kube-system/kube-proxy-bftws) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.372957       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-bftws\": pod kube-proxy-bftws is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-bftws"
	I0314 18:22:12.373005       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-bftws" node="ha-105786-m04"
	E0314 18:22:12.434579       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"kube-proxy-jsmlk\": pod kube-proxy-jsmlk is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="kube-system/kube-proxy-jsmlk" node="ha-105786-m04"
	E0314 18:22:12.435013       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod f612bbff-e439-4f91-a45a-773f9f11c1b9(kube-system/kube-proxy-jsmlk) wasn't assumed so cannot be forgotten"
	E0314 18:22:12.435160       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"kube-proxy-jsmlk\": pod kube-proxy-jsmlk is already assigned to node \"ha-105786-m04\"" pod="kube-system/kube-proxy-jsmlk"
	I0314 18:22:12.435240       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="kube-system/kube-proxy-jsmlk" node="ha-105786-m04"
	
	
	==> kubelet <==
	Mar 14 18:24:32 ha-105786 kubelet[1439]: E0314 18:24:32.304633    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 2m40s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:24:46 ha-105786 kubelet[1439]: I0314 18:24:46.307273    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:50 ha-105786 kubelet[1439]: E0314 18:24:50.364258    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:24:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:24:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: I0314 18:24:53.075825    1439 scope.go:117] "RemoveContainer" containerID="b2e92a5caf833aff399eab29d210e9040ecfa089f408e3eed3060c0d9c9a9e6a"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: I0314 18:24:53.076189    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:24:53 ha-105786 kubelet[1439]: E0314 18:24:53.076493    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:07 ha-105786 kubelet[1439]: I0314 18:25:07.304434    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:25:07 ha-105786 kubelet[1439]: E0314 18:25:07.304802    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:21 ha-105786 kubelet[1439]: I0314 18:25:21.304234    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:25:21 ha-105786 kubelet[1439]: E0314 18:25:21.304934    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:33 ha-105786 kubelet[1439]: I0314 18:25:33.303494    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:25:33 ha-105786 kubelet[1439]: E0314 18:25:33.304065    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:46 ha-105786 kubelet[1439]: I0314 18:25:46.304327    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:25:46 ha-105786 kubelet[1439]: E0314 18:25:46.304630    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:25:50 ha-105786 kubelet[1439]: E0314 18:25:50.360977    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:25:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:25:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:25:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:25:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:26:01 ha-105786 kubelet[1439]: I0314 18:26:01.303471    1439 scope.go:117] "RemoveContainer" containerID="ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	Mar 14 18:26:01 ha-105786 kubelet[1439]: E0314 18:26:01.303897    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	

                                                
                                                
-- /stdout --
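The kubelet log above shows the iptables canary repeatedly failing because the ip6tables "nat" table cannot be initialized in the guest. A minimal way to check this from the host, assuming the cause is simply that the legacy ip6table_nat kernel module is not loaded (the module name and the modprobe remedy are assumptions, not something the log confirms):

    out/minikube-linux-amd64 -p ha-105786 ssh "lsmod | grep ip6table_nat"    # is the ip6tables NAT module present in the guest?
    out/minikube-linux-amd64 -p ha-105786 ssh "sudo modprobe ip6table_nat"   # assumption: the guest kernel ships this module

The canary failure itself is typically log noise rather than the cause of the test failure, but it is cheap to rule out.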
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105786 -n ha-105786
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartSecondaryNode (55.68s)
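Two problems dominate the post-mortem above: kube-vip-ha-105786 is in CrashLoopBackOff with a 5m back-off, and the scheduler keeps hitting "already assigned to node" conflicts for duplicate kube-proxy/kindnet pods targeted at ha-105786-m04. A hedged way to dig into both from the same kubeconfig context (the pod names are taken from the log and may already have been garbage-collected by the time this is read):

    kubectl --context ha-105786 -n kube-system logs kube-vip-ha-105786 --previous                                  # output of the last crashed kube-vip attempt
    kubectl --context ha-105786 -n kube-system get pods -o wide --field-selector spec.nodeName=ha-105786-m04       # which kube-proxy/kindnet pods actually landed on m04

The binding conflicts are the scheduler retrying pods that were already bound to m04, so they usually resolve on their own; the kube-vip crash loop is worth investigating first, since kube-vip provides the control-plane VIP (192.168.39.254) for this HA profile.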

                                                
                                    
x
+
TestMutliControlPlane/serial/RestartClusterKeepsNodes (376.91s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/RestartClusterKeepsNodes
ha_test.go:456: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-105786 -v=7 --alsologtostderr
ha_test.go:462: (dbg) Run:  out/minikube-linux-amd64 stop -p ha-105786 -v=7 --alsologtostderr
E0314 18:27:14.527608  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:27:14.854089  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:27:42.214431  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
ha_test.go:462: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p ha-105786 -v=7 --alsologtostderr: exit status 82 (2m2.735156728s)

                                                
                                                
-- stdout --
	* Stopping node "ha-105786-m04"  ...
	* Stopping node "ha-105786-m03"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:26:11.396456  966031 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:26:11.396730  966031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:26:11.396741  966031 out.go:304] Setting ErrFile to fd 2...
	I0314 18:26:11.396745  966031 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:26:11.396909  966031 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:26:11.397131  966031 out.go:298] Setting JSON to false
	I0314 18:26:11.397210  966031 mustload.go:65] Loading cluster: ha-105786
	I0314 18:26:11.397627  966031 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:26:11.397726  966031 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:26:11.397905  966031 mustload.go:65] Loading cluster: ha-105786
	I0314 18:26:11.398035  966031 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:26:11.398061  966031 stop.go:39] StopHost: ha-105786-m04
	I0314 18:26:11.398441  966031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:11.398495  966031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:11.415291  966031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36683
	I0314 18:26:11.415796  966031 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:11.416434  966031 main.go:141] libmachine: Using API Version  1
	I0314 18:26:11.416466  966031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:11.416843  966031 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:11.419317  966031 out.go:177] * Stopping node "ha-105786-m04"  ...
	I0314 18:26:11.420682  966031 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 18:26:11.420711  966031 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:26:11.420974  966031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 18:26:11.421009  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:26:11.423789  966031 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:11.424241  966031 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:26:11.424279  966031 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:26:11.424427  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:26:11.424608  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:26:11.424794  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:26:11.424955  966031 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:26:11.516774  966031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 18:26:11.571358  966031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 18:26:11.629268  966031 main.go:141] libmachine: Stopping "ha-105786-m04"...
	I0314 18:26:11.629301  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:26:11.630986  966031 main.go:141] libmachine: (ha-105786-m04) Calling .Stop
	I0314 18:26:11.634990  966031 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 0/120
	I0314 18:26:12.636833  966031 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 1/120
	I0314 18:26:13.639680  966031 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:26:13.641016  966031 main.go:141] libmachine: Machine "ha-105786-m04" was stopped.
	I0314 18:26:13.641038  966031 stop.go:75] duration metric: took 2.220359246s to stop
	I0314 18:26:13.641060  966031 stop.go:39] StopHost: ha-105786-m03
	I0314 18:26:13.641469  966031 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:26:13.641515  966031 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:26:13.657303  966031 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43189
	I0314 18:26:13.657742  966031 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:26:13.658238  966031 main.go:141] libmachine: Using API Version  1
	I0314 18:26:13.658260  966031 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:26:13.658662  966031 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:26:13.660952  966031 out.go:177] * Stopping node "ha-105786-m03"  ...
	I0314 18:26:13.662280  966031 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 18:26:13.662312  966031 main.go:141] libmachine: (ha-105786-m03) Calling .DriverName
	I0314 18:26:13.662557  966031 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 18:26:13.662579  966031 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHHostname
	I0314 18:26:13.665811  966031 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:13.666253  966031 main.go:141] libmachine: (ha-105786-m03) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:34:3f:75", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:20:32 +0000 UTC Type:0 Mac:52:54:00:34:3f:75 Iaid: IPaddr:192.168.39.190 Prefix:24 Hostname:ha-105786-m03 Clientid:01:52:54:00:34:3f:75}
	I0314 18:26:13.666282  966031 main.go:141] libmachine: (ha-105786-m03) DBG | domain ha-105786-m03 has defined IP address 192.168.39.190 and MAC address 52:54:00:34:3f:75 in network mk-ha-105786
	I0314 18:26:13.666409  966031 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHPort
	I0314 18:26:13.666601  966031 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHKeyPath
	I0314 18:26:13.666759  966031 main.go:141] libmachine: (ha-105786-m03) Calling .GetSSHUsername
	I0314 18:26:13.666899  966031 sshutil.go:53] new ssh client: &{IP:192.168.39.190 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m03/id_rsa Username:docker}
	I0314 18:26:13.752910  966031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 18:26:13.807712  966031 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 18:26:13.863019  966031 main.go:141] libmachine: Stopping "ha-105786-m03"...
	I0314 18:26:13.863048  966031 main.go:141] libmachine: (ha-105786-m03) Calling .GetState
	I0314 18:26:13.864720  966031 main.go:141] libmachine: (ha-105786-m03) Calling .Stop
	I0314 18:26:13.868237  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 0/120
	I0314 18:26:14.869555  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 1/120
	I0314 18:26:15.870946  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 2/120
	I0314 18:26:16.872547  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 3/120
	I0314 18:26:17.874702  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 4/120
	I0314 18:26:18.876690  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 5/120
	I0314 18:26:19.878418  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 6/120
	I0314 18:26:20.879910  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 7/120
	I0314 18:26:21.881498  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 8/120
	I0314 18:26:22.883003  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 9/120
	I0314 18:26:23.884911  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 10/120
	I0314 18:26:24.886619  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 11/120
	I0314 18:26:25.888188  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 12/120
	I0314 18:26:26.889849  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 13/120
	I0314 18:26:27.891340  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 14/120
	I0314 18:26:28.892918  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 15/120
	I0314 18:26:29.894385  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 16/120
	I0314 18:26:30.895810  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 17/120
	I0314 18:26:31.897287  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 18/120
	I0314 18:26:32.899541  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 19/120
	I0314 18:26:33.901419  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 20/120
	I0314 18:26:34.903173  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 21/120
	I0314 18:26:35.904537  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 22/120
	I0314 18:26:36.906170  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 23/120
	I0314 18:26:37.907470  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 24/120
	I0314 18:26:38.909602  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 25/120
	I0314 18:26:39.911134  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 26/120
	I0314 18:26:40.912816  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 27/120
	I0314 18:26:41.914842  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 28/120
	I0314 18:26:42.916563  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 29/120
	I0314 18:26:43.919023  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 30/120
	I0314 18:26:44.920671  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 31/120
	I0314 18:26:45.922157  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 32/120
	I0314 18:26:46.923949  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 33/120
	I0314 18:26:47.925491  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 34/120
	I0314 18:26:48.927364  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 35/120
	I0314 18:26:49.928939  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 36/120
	I0314 18:26:50.930744  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 37/120
	I0314 18:26:51.932153  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 38/120
	I0314 18:26:52.933595  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 39/120
	I0314 18:26:53.935447  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 40/120
	I0314 18:26:54.936879  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 41/120
	I0314 18:26:55.938245  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 42/120
	I0314 18:26:56.939606  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 43/120
	I0314 18:26:57.941046  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 44/120
	I0314 18:26:58.942940  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 45/120
	I0314 18:26:59.944335  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 46/120
	I0314 18:27:00.945826  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 47/120
	I0314 18:27:01.947101  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 48/120
	I0314 18:27:02.948561  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 49/120
	I0314 18:27:03.950385  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 50/120
	I0314 18:27:04.951709  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 51/120
	I0314 18:27:05.953147  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 52/120
	I0314 18:27:06.954757  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 53/120
	I0314 18:27:07.956471  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 54/120
	I0314 18:27:08.958295  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 55/120
	I0314 18:27:09.959801  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 56/120
	I0314 18:27:10.961220  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 57/120
	I0314 18:27:11.962785  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 58/120
	I0314 18:27:12.964125  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 59/120
	I0314 18:27:13.965768  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 60/120
	I0314 18:27:14.967223  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 61/120
	I0314 18:27:15.968641  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 62/120
	I0314 18:27:16.970067  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 63/120
	I0314 18:27:17.971406  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 64/120
	I0314 18:27:18.972840  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 65/120
	I0314 18:27:19.974305  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 66/120
	I0314 18:27:20.975842  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 67/120
	I0314 18:27:21.977165  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 68/120
	I0314 18:27:22.978493  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 69/120
	I0314 18:27:23.980386  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 70/120
	I0314 18:27:24.981986  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 71/120
	I0314 18:27:25.983418  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 72/120
	I0314 18:27:26.984858  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 73/120
	I0314 18:27:27.986269  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 74/120
	I0314 18:27:28.987884  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 75/120
	I0314 18:27:29.989367  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 76/120
	I0314 18:27:30.990815  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 77/120
	I0314 18:27:31.992540  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 78/120
	I0314 18:27:32.993948  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 79/120
	I0314 18:27:33.995613  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 80/120
	I0314 18:27:34.997031  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 81/120
	I0314 18:27:35.998794  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 82/120
	I0314 18:27:37.000162  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 83/120
	I0314 18:27:38.001514  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 84/120
	I0314 18:27:39.002828  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 85/120
	I0314 18:27:40.004228  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 86/120
	I0314 18:27:41.005605  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 87/120
	I0314 18:27:42.006898  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 88/120
	I0314 18:27:43.008292  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 89/120
	I0314 18:27:44.009988  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 90/120
	I0314 18:27:45.011354  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 91/120
	I0314 18:27:46.012898  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 92/120
	I0314 18:27:47.014185  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 93/120
	I0314 18:27:48.015649  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 94/120
	I0314 18:27:49.017404  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 95/120
	I0314 18:27:50.019180  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 96/120
	I0314 18:27:51.020578  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 97/120
	I0314 18:27:52.021921  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 98/120
	I0314 18:27:53.023323  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 99/120
	I0314 18:27:54.025202  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 100/120
	I0314 18:27:55.026568  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 101/120
	I0314 18:27:56.027834  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 102/120
	I0314 18:27:57.029420  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 103/120
	I0314 18:27:58.030630  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 104/120
	I0314 18:27:59.032378  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 105/120
	I0314 18:28:00.033629  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 106/120
	I0314 18:28:01.034727  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 107/120
	I0314 18:28:02.036068  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 108/120
	I0314 18:28:03.037230  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 109/120
	I0314 18:28:04.039045  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 110/120
	I0314 18:28:05.040554  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 111/120
	I0314 18:28:06.041775  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 112/120
	I0314 18:28:07.043392  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 113/120
	I0314 18:28:08.044987  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 114/120
	I0314 18:28:09.046840  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 115/120
	I0314 18:28:10.048486  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 116/120
	I0314 18:28:11.049960  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 117/120
	I0314 18:28:12.051367  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 118/120
	I0314 18:28:13.052945  966031 main.go:141] libmachine: (ha-105786-m03) Waiting for machine to stop 119/120
	I0314 18:28:14.053520  966031 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 18:28:14.053648  966031 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 18:28:14.055780  966031 out.go:177] 
	W0314 18:28:14.057207  966031 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 18:28:14.057229  966031 out.go:239] * 
	* 
	W0314 18:28:14.069011  966031 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 18:28:14.070517  966031 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:464: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p ha-105786 -v=7 --alsologtostderr" : exit status 82
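Exit status 82 corresponds to the GUEST_STOP_TIMEOUT above: ha-105786-m04 stopped in about two seconds, but ha-105786-m03 was still "Running" after all 120 one-second polls, so the stop command gave up. When the kvm2 driver gets stuck like this, one possible manual follow-up on the CI host is to ask libvirt directly (assuming the driver's default qemu:///system URI and that the libvirt domain carries the machine name, as the DBG lines above suggest):

    virsh -c qemu:///system list --all                 # confirm the ha-105786-m03 domain is still running
    virsh -c qemu:///system shutdown ha-105786-m03     # retry a graceful ACPI shutdown
    virsh -c qemu:///system destroy ha-105786-m03      # hard power-off if the guest ignores ACPI

None of this is done by the test itself; it is only a sketch of how to unstick the VM before re-running the suite.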
ha_test.go:467: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105786 --wait=true -v=7 --alsologtostderr
E0314 18:32:14.528501  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:32:14.854319  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
ha_test.go:467: (dbg) Done: out/minikube-linux-amd64 start -p ha-105786 --wait=true -v=7 --alsologtostderr: (4m11.250794125s)
ha_test.go:472: (dbg) Run:  out/minikube-linux-amd64 node list -p ha-105786
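The restart itself completed (4m11s), so the remaining question is whether all four nodes rejoined. A quick cross-check against the API server, reusing the same field selector the harness uses in its post-mortems (both commands assume the ha-105786 context is still present in the shared kubeconfig):

    kubectl --context ha-105786 get nodes -o wide                                       # expect ha-105786, m02, m03 and m04 to be Ready
    kubectl --context ha-105786 get pods -A --field-selector=status.phase!=Running      # anything still stuck after the restart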
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105786 -n ha-105786
helpers_test.go:244: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/RestartClusterKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 logs -n 25: (2.044331905s)
helpers_test.go:252: TestMutliControlPlane/serial/RestartClusterKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m04 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp testdata/cp-test.txt                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m03 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105786 node stop m02 -v=7                                                     | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105786 node start m02 -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786 -v=7                                                           | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-105786 -v=7                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-105786 --wait=true -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:28:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:28:14.135581  966381 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:28:14.135809  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.135819  966381 out.go:304] Setting ErrFile to fd 2...
	I0314 18:28:14.135823  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.136017  966381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:28:14.136635  966381 out.go:298] Setting JSON to false
	I0314 18:28:14.137555  966381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":94246,"bootTime":1710346648,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:28:14.137623  966381 start.go:139] virtualization: kvm guest
	I0314 18:28:14.140151  966381 out.go:177] * [ha-105786] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:28:14.141865  966381 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:28:14.143139  966381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:28:14.141950  966381 notify.go:220] Checking for updates...
	I0314 18:28:14.145606  966381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:28:14.146880  966381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:28:14.148172  966381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:28:14.149433  966381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:28:14.151456  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.151545  966381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:28:14.151938  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.152008  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.167584  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
	I0314 18:28:14.167994  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.168545  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.168568  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.168939  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.169128  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.204576  966381 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:28:14.205886  966381 start.go:297] selected driver: kvm2
	I0314 18:28:14.205904  966381 start.go:901] validating driver "kvm2" against &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.206073  966381 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:28:14.206444  966381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.206518  966381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:28:14.221472  966381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:28:14.222163  966381 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:28:14.222234  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:28:14.222247  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:28:14.222320  966381 start.go:340] cluster config:
	{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.222495  966381 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.224385  966381 out.go:177] * Starting "ha-105786" primary control-plane node in "ha-105786" cluster
	I0314 18:28:14.225804  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:28:14.225844  966381 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:28:14.225865  966381 cache.go:56] Caching tarball of preloaded images
	I0314 18:28:14.225953  966381 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:28:14.225968  966381 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:28:14.226092  966381 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:28:14.226284  966381 start.go:360] acquireMachinesLock for ha-105786: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:28:14.226328  966381 start.go:364] duration metric: took 24.57µs to acquireMachinesLock for "ha-105786"
	I0314 18:28:14.226352  966381 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:28:14.226360  966381 fix.go:54] fixHost starting: 
	I0314 18:28:14.226657  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.226701  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.241136  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I0314 18:28:14.241530  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.241978  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.242009  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.242325  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.242572  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.242703  966381 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:28:14.244394  966381 fix.go:112] recreateIfNeeded on ha-105786: state=Running err=<nil>
	W0314 18:28:14.244429  966381 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:28:14.246194  966381 out.go:177] * Updating the running kvm2 "ha-105786" VM ...
	I0314 18:28:14.247325  966381 machine.go:94] provisionDockerMachine start ...
	I0314 18:28:14.247347  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.247553  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.250303  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.250837  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.250860  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.251033  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.251208  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251368  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251526  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.251694  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.251915  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.251931  966381 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:28:14.358280  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.358312  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358557  966381 buildroot.go:166] provisioning hostname "ha-105786"
	I0314 18:28:14.358582  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358772  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.361558  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362012  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.362038  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362163  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.362358  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362546  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362671  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.362833  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.363062  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.363078  966381 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname
	I0314 18:28:14.484069  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.484119  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.487433  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.487941  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.487983  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.488155  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.488354  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488522  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488656  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.488852  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.489074  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.489104  966381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:28:14.594587  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:28:14.594624  966381 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:28:14.594655  966381 buildroot.go:174] setting up certificates
	I0314 18:28:14.594666  966381 provision.go:84] configureAuth start
	I0314 18:28:14.594676  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.594943  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:28:14.597572  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598001  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.598034  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598168  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.600676  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601049  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.601074  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601190  966381 provision.go:143] copyHostCerts
	I0314 18:28:14.601228  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601277  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:28:14.601287  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601369  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:28:14.601478  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601510  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:28:14.601520  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601557  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:28:14.601636  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601666  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:28:14.601679  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601714  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:28:14.601795  966381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786 san=[127.0.0.1 192.168.39.170 ha-105786 localhost minikube]
	I0314 18:28:14.793407  966381 provision.go:177] copyRemoteCerts
	I0314 18:28:14.793506  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:28:14.793541  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.796538  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.796966  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.796996  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.797178  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.797386  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.797602  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.797798  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:28:14.880507  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:28:14.880595  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:28:14.909434  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:28:14.909498  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0314 18:28:14.938304  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:28:14.938363  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:28:14.967874  966381 provision.go:87] duration metric: took 373.192835ms to configureAuth
	I0314 18:28:14.967907  966381 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:28:14.968172  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.968305  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.970873  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971322  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.971347  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971576  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.971817  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972020  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972248  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.972453  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.972619  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.972633  966381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:29:45.881390  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:29:45.881434  966381 machine.go:97] duration metric: took 1m31.634090272s to provisionDockerMachine
	I0314 18:29:45.881454  966381 start.go:293] postStartSetup for "ha-105786" (driver="kvm2")
	I0314 18:29:45.881471  966381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:29:45.881500  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:45.881876  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:29:45.881911  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:45.885484  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886080  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:45.886109  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886288  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:45.886498  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:45.886677  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:45.886828  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:45.969077  966381 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:29:45.973787  966381 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:29:45.973816  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:29:45.973920  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:29:45.973999  966381 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:29:45.974015  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:29:45.974096  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:29:45.984935  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:46.011260  966381 start.go:296] duration metric: took 129.788525ms for postStartSetup
	I0314 18:29:46.011314  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.011643  966381 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0314 18:29:46.011670  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.014882  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015320  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.015348  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015461  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.015649  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.015818  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.015958  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	W0314 18:29:46.095015  966381 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0314 18:29:46.095038  966381 fix.go:56] duration metric: took 1m31.868679135s for fixHost
	I0314 18:29:46.095062  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.097850  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098240  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.098263  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098402  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.098618  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098810  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098954  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.099135  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:29:46.099304  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:29:46.099313  966381 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:29:46.201375  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440986.158036020
	
	I0314 18:29:46.201405  966381 fix.go:216] guest clock: 1710440986.158036020
	I0314 18:29:46.201414  966381 fix.go:229] Guest: 2024-03-14 18:29:46.15803602 +0000 UTC Remote: 2024-03-14 18:29:46.095045686 +0000 UTC m=+92.011661084 (delta=62.990334ms)
	I0314 18:29:46.201440  966381 fix.go:200] guest clock delta is within tolerance: 62.990334ms
	I0314 18:29:46.201447  966381 start.go:83] releasing machines lock for "ha-105786", held for 1m31.975109616s
	I0314 18:29:46.201474  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.201805  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:46.204592  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205008  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.205040  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205187  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205819  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205986  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.206081  966381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:29:46.206122  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.206196  966381 ssh_runner.go:195] Run: cat /version.json
	I0314 18:29:46.206230  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.209016  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209269  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209428  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209454  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209632  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.209841  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.209848  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209879  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.210105  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.210118  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210303  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.210337  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.210512  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210654  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.286133  966381 ssh_runner.go:195] Run: systemctl --version
	I0314 18:29:46.314007  966381 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:29:46.480684  966381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:29:46.491962  966381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:29:46.492020  966381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:29:46.502226  966381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:29:46.502257  966381 start.go:494] detecting cgroup driver to use...
	I0314 18:29:46.502322  966381 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:29:46.518806  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:29:46.533532  966381 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:29:46.533603  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:29:46.547594  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:29:46.562640  966381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:29:46.748094  966381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:29:46.913423  966381 docker.go:233] disabling docker service ...
	I0314 18:29:46.913498  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:29:46.935968  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:29:46.951541  966381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:29:47.101123  966381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:29:47.251805  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:29:47.268510  966381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:29:47.289410  966381 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:29:47.289473  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.300842  966381 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:29:47.300915  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.311901  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.322706  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.333403  966381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:29:47.344317  966381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:29:47.353903  966381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:29:47.363435  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:47.507378  966381 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:29:50.656435  966381 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.149013693s)
	I0314 18:29:50.656479  966381 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:29:50.656556  966381 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:29:50.663052  966381 start.go:562] Will wait 60s for crictl version
	I0314 18:29:50.663110  966381 ssh_runner.go:195] Run: which crictl
	I0314 18:29:50.667345  966381 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:29:50.709178  966381 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:29:50.709266  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.740672  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.776822  966381 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:29:50.778198  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:50.781287  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781657  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:50.781682  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781874  966381 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:29:50.787094  966381 kubeadm.go:877] updating cluster {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:29:50.787242  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:29:50.787286  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.836424  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.836452  966381 crio.go:415] Images already preloaded, skipping extraction
	I0314 18:29:50.836534  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.883409  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.883436  966381 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:29:50.883446  966381 kubeadm.go:928] updating node { 192.168.39.170 8443 v1.28.4 crio true true} ...
	I0314 18:29:50.883558  966381 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:29:50.883631  966381 ssh_runner.go:195] Run: crio config
	I0314 18:29:50.940266  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:29:50.940294  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:29:50.940311  966381 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:29:50.940343  966381 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105786 NodeName:ha-105786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:29:50.940523  966381 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105786"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:29:50.940545  966381 kube-vip.go:105] generating kube-vip config ...
	I0314 18:29:50.940622  966381 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0314 18:29:50.940691  966381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:29:50.952135  966381 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:29:50.952199  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:29:50.962147  966381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0314 18:29:50.980512  966381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:29:50.999194  966381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0314 18:29:51.017990  966381 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:29:51.037979  966381 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:29:51.042188  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:51.223538  966381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:29:51.259248  966381 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.170
	I0314 18:29:51.259276  966381 certs.go:194] generating shared ca certs ...
	I0314 18:29:51.259299  966381 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.259518  966381 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:29:51.259558  966381 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:29:51.259574  966381 certs.go:256] generating profile certs ...
	I0314 18:29:51.259649  966381 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:29:51.259676  966381 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054
	I0314 18:29:51.259692  966381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.190 192.168.39.254]
	I0314 18:29:51.368068  966381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 ...
	I0314 18:29:51.368106  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054: {Name:mk104c6891f4c562b4c5c1e2fd4fbf7ab8a19f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368317  966381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 ...
	I0314 18:29:51.368336  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054: {Name:mk5452416f251a959745d5afed1f6504eb414193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368435  966381 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:29:51.368581  966381 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:29:51.368708  966381 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:29:51.368724  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:29:51.368742  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:29:51.368755  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:29:51.368765  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:29:51.368785  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:29:51.368797  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:29:51.368818  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:29:51.368830  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:29:51.368880  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:29:51.368907  966381 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:29:51.368918  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:29:51.368936  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:29:51.368955  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:29:51.368982  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:29:51.369020  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:51.369046  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.369061  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.369070  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.369722  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:29:51.397901  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:29:51.424922  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:29:51.451188  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:29:51.476429  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 18:29:51.502245  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:29:51.528392  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:29:51.556131  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:29:51.582461  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:29:51.616725  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:29:51.644959  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:29:51.672417  966381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:29:51.691534  966381 ssh_runner.go:195] Run: openssl version
	I0314 18:29:51.699328  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:29:51.711530  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716524  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716584  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.722737  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:29:51.734143  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:29:51.746609  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751712  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751761  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.758096  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:29:51.769099  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:29:51.781745  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786840  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786914  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.793202  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:29:51.804099  966381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:29:51.809375  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:29:51.815416  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:29:51.821550  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:29:51.827643  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:29:51.833646  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:29:51.839821  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 18:29:51.845930  966381 kubeadm.go:391] StartCluster: {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:29:51.846070  966381 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:29:51.846123  966381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:29:51.887584  966381 cri.go:89] found id: "f7e5ce54c582966673026ea5da16491d518cf3c3aaeee9a7ceb0f7b5a1372c5f"
	I0314 18:29:51.887615  966381 cri.go:89] found id: "2f4ac879a201f1cf19940c5c6d0a391e018b4fb1e3238c5e18f108de7dfe9d49"
	I0314 18:29:51.887619  966381 cri.go:89] found id: "c1c9118d7d57dfe1750090185c762b93b050f71cdc3e71a82977bc49623be966"
	I0314 18:29:51.887621  966381 cri.go:89] found id: "ce25ae74cf40b4fa581afd2062a90e823cb3fa2657388e63ce08248941a6fad4"
	I0314 18:29:51.887624  966381 cri.go:89] found id: "ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	I0314 18:29:51.887627  966381 cri.go:89] found id: "b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b"
	I0314 18:29:51.887630  966381 cri.go:89] found id: "4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817"
	I0314 18:29:51.887632  966381 cri.go:89] found id: "fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d"
	I0314 18:29:51.887634  966381 cri.go:89] found id: "50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9"
	I0314 18:29:51.887640  966381 cri.go:89] found id: "3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2"
	I0314 18:29:51.887642  966381 cri.go:89] found id: "dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183"
	I0314 18:29:51.887644  966381 cri.go:89] found id: "ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922"
	I0314 18:29:51.887646  966381 cri.go:89] found id: "ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc"
	I0314 18:29:51.887650  966381 cri.go:89] found id: ""
	I0314 18:29:51.887692  966381 ssh_runner.go:195] Run: sudo runc list -f json
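The container IDs in the "found id" lines above come from the crictl query shown a few lines earlier, which lists every kube-system container known to CRI-O (running or exited). A rough equivalent, assuming crictl on the node is already configured against the CRI-O socket:

  sudo crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system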
	
	
	==> CRI-O <==
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.166466781Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ebceac80-ae58-4c75-8b8e-1afb33e16605 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.174185202Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=8b5be3bb-0fc5-436e-ad2b-615b2039e330 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.174777066Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441146174667245,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=8b5be3bb-0fc5-436e-ad2b-615b2039e330 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.177087758Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6bc88642-b58a-40ed-b965-0db7856311e5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.177146919Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6bc88642-b58a-40ed-b965-0db7856311e5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.182484747Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6bc88642-b58a-40ed-b965-0db7856311e5 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.237326283Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85901cb9-4b0c-4b98-884e-7b8dfc2162e5 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.237415584Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85901cb9-4b0c-4b98-884e-7b8dfc2162e5 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.238797375Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5b8bb5eb-1dc3-49eb-bf43-e24f560da65b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.239299126Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441146239275508,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5b8bb5eb-1dc3-49eb-bf43-e24f560da65b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.240578900Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=581dd8ed-828d-49ac-a925-7c19484c24bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.240633157Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=581dd8ed-828d-49ac-a925-7c19484c24bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.241185160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=581dd8ed-828d-49ac-a925-7c19484c24bb name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.289311109Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d97e0df-d67f-478a-9d54-21e2e195cf69 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.289542233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d97e0df-d67f-478a-9d54-21e2e195cf69 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.292071440Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=94d9d969-1462-4f2b-a603-7555c81b6d39 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.292818901Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441146292786719,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=94d9d969-1462-4f2b-a603-7555c81b6d39 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.293466869Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9f33b85-80f2-4ed2-b465-967f88029cd9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.293519677Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9f33b85-80f2-4ed2-b465-967f88029cd9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.294141677Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9f33b85-80f2-4ed2-b465-967f88029cd9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.305576737Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1cfcc7d-9a6e-4ed4-a728-f86bd1152b3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.306054062Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4h99c,Uid:6f1d3430-1aec-4155-8b75-951d851d54ae,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710441031499520920,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:21:34.785609145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cx8rc,Uid:d2e960de-67a9-4385-ba02-78a744602bcc,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1710440997873572735,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:05.130122921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jsddl,Uid:bdbdea16-97b0-4581-8bab-9a472af11004,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997825478815,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.see
n: 2024-03-14T18:19:05.130015748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-105786,Uid:dc5e46764078ce514b56622c3d7888bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997804507421,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc5e46764078ce514b56622c3d7888bf,kubernetes.io/config.seen: 2024-03-14T18:18:50.238637371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:566fc43f-5610-4dcd-b683-1cc87e6ed609,Namespace:kube-syste
m,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997780144979,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T18:19:05.127453312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&PodSandboxMetadata{Name:kindnet-9b2pr,Uid:e23e9c49-0b7d-46ca-ae62-11e9b26a1280,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997770380748,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.650209605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-105786,Uid:6da
c53b7248a384afeccfc55d43bb2fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997711563691,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.170:8443,kubernetes.io/config.hash: 6dac53b7248a384afeccfc55d43bb2fb,kubernetes.io/config.seen: 2024-03-14T18:18:50.238635960Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&PodSandboxMetadata{Name:kube-proxy-hd8mx,Uid:3e003f67-93dd-4105-a7bd-68d9af563ea4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997709812403,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.
name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.519257450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-105786,Uid:ec78945afcff39cee32fcf6f6d645c30,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997697898517,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec78945afcff39cee32fcf6f6d645c30,kubernetes.io/config.seen: 2024-03-14T18:18:50.238638790Z,kubernetes.io/config.source: file,},RuntimeHandle
r:,},&PodSandbox{Id:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&PodSandboxMetadata{Name:etcd-ha-105786,Uid:0cd908946f83a665c0ef77bb7bd5e5ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997687102648,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.170:2379,kubernetes.io/config.hash: 0cd908946f83a665c0ef77bb7bd5e5ea,kubernetes.io/config.seen: 2024-03-14T18:18:50.238632058Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-105786,Uid:8a8d15e80402cb826977826234ee3c6a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440991133922489,La
bels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{kubernetes.io/config.hash: 8a8d15e80402cb826977826234ee3c6a,kubernetes.io/config.seen: 2024-03-14T18:18:50.238639597Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=f1cfcc7d-9a6e-4ed4-a728-f86bd1152b3d name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.306811438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=51be2035-0dbf-4a59-82af-74ab87836b0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.306860505Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=51be2035-0dbf-4a59-82af-74ab87836b0a name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:32:26 ha-105786 crio[4214]: time="2024-03-14 18:32:26.307043846Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.
terminationGracePeriod: 30,},},&Container{Id:a3c889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol
\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.cont
ainer.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-s
cheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kub
e-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=51be2035-0dbf-4a59-82af-74ab87836b0a name=/runtime.v1.RuntimeService/ListContainers
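Editor's note: the ListContainers/ListPodSandbox entries above are ordinary CRI (Container Runtime Interface) RPCs that clients such as the kubelet or crictl issue against CRI-O over its unix socket; the "container status" table below is likely the same data rendered by crictl. As a rough, illustrative sketch only (not part of the test suite), the same running-containers query could be issued with the Go CRI client. The package paths and socket location are assumptions based on the cri-socket annotation shown further below, and the program would need to run as root on the node.

// cri_list.go — illustrative sketch only, assuming k8s.io/cri-api and
// google.golang.org/grpc are available and the program runs as root on the node.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	// CRI-O's CRI endpoint (matches the kubeadm cri-socket annotation on this node).
	conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		log.Fatal(err)
	}
	defer conn.Close()

	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	// Same filter as the logged request: only containers in CONTAINER_RUNNING state.
	resp, err := runtimeapi.NewRuntimeServiceClient(conn).ListContainers(ctx,
		&runtimeapi.ListContainersRequest{
			Filter: &runtimeapi.ContainerFilter{
				State: &runtimeapi.ContainerStateValue{
					State: runtimeapi.ContainerState_CONTAINER_RUNNING,
				},
			},
		})
	if err != nil {
		log.Fatal(err)
	}
	for _, c := range resp.Containers {
		fmt.Printf("%s\t%s\t%s\n", c.Id, c.Metadata.Name, c.State)
	}
}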
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	9604ba67bf9c8       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               3                   219b0738aa779       kindnet-9b2pr
	3bb0081eb2a08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       4                   01cedb7ac2289       storage-provisioner
	e23ea5e28f6e5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   2                   dba584eb3e199       kube-controller-manager-ha-105786
	73d5356f4c557       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            3                   ca368554eaafd       kube-apiserver-ha-105786
	3704dc6ef5119       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      About a minute ago   Running             busybox                   1                   690093c450be8       busybox-5b5d89c9d6-4h99c
	ae7c1c8fbe250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago        Exited              storage-provisioner       3                   01cedb7ac2289       storage-provisioner
	dbe0af9dcb333       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      2 minutes ago        Running             kube-proxy                1                   6fd8ebd3d1377       kube-proxy-hd8mx
	25d7b21ffe66f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   55727b536f419       coredns-5dd5756b68-cx8rc
	a3c889282a698       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      2 minutes ago        Running             coredns                   1                   be55e1f9341f6       coredns-5dd5756b68-jsddl
	6d30bfdc11c1c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago        Exited              kindnet-cni               2                   219b0738aa779       kindnet-9b2pr
	28469939f60bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago        Exited              kube-controller-manager   1                   dba584eb3e199       kube-controller-manager-ha-105786
	4269a70d03936       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      2 minutes ago        Running             kube-scheduler            1                   3374ed1f059b9       kube-scheduler-ha-105786
	782768ee692d7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago        Exited              kube-apiserver            2                   ca368554eaafd       kube-apiserver-ha-105786
	56ede7d89c5f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      2 minutes ago        Running             etcd                      1                   22995c012df03       etcd-ha-105786
	ded817e115254       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      2 minutes ago        Exited              kube-vip                  8                   d61621e3b64fc       kube-vip-ha-105786
	522fa7bdb84ee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   10 minutes ago       Exited              busybox                   0                   c09b6e29d418a       busybox-5b5d89c9d6-4h99c
	b538852248364       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   c6041a600821e       coredns-5dd5756b68-cx8rc
	4fbdd8b34ac46       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      13 minutes ago       Exited              coredns                   0                   880e93f2a3ed5       coredns-5dd5756b68-jsddl
	50a3dcdc83e53       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      13 minutes ago       Exited              kube-proxy                0                   6d43c44b3e99b       kube-proxy-hd8mx
	3f27ba9bd31a4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      13 minutes ago       Exited              kube-scheduler            0                   ed2bf5bc80b8e       kube-scheduler-ha-105786
	ff7528019bad0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      13 minutes ago       Exited              etcd                      0                   c3fe1175987df       etcd-ha-105786
	
	
	==> coredns [25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34707 - 24413 "HINFO IN 4449984729202792723.1825095687933891679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008723816s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
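Editor's note: the repeated plugin/ready lines above come from CoreDNS's ready plugin, which keeps the pod's readiness endpoint failing until the kubernetes plugin has synced with the API server, so these messages indicate the apiserver was still unreachable at that point. A minimal sketch of polling that endpoint from inside the cluster network follows; the upstream default ready-plugin port 8181 is an assumption (the port is not stated in this log), and 10.244.0.6 is taken from the connection-reset line above.

// ready_probe.go — illustrative only; assumes CoreDNS's ready plugin is
// listening on its upstream default port 8181 at the pod IP seen above.
package main

import (
	"fmt"
	"log"
	"net/http"
)

func main() {
	resp, err := http.Get("http://10.244.0.6:8181/ready")
	if err != nil {
		log.Fatal(err)
	}
	defer resp.Body.Close()
	// 200 OK once the kubernetes plugin has synced; 503 while it is still
	// "waiting on: kubernetes" as in the log above.
	fmt.Println(resp.Status)
}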
	
	
	==> coredns [4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817] <==
	[INFO] 10.244.0.4:39798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049996s
	[INFO] 10.244.0.4:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088514s
	[INFO] 10.244.2.2:53227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129249s
	[INFO] 10.244.1.2:38289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162454s
	[INFO] 10.244.1.2:39880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157216s
	[INFO] 10.244.0.4:40457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166755s
	[INFO] 10.244.0.4:47654 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165231s
	[INFO] 10.244.2.2:56922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021872s
	[INFO] 10.244.2.2:55729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082709s
	[INFO] 10.244.2.2:40076 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091316s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1530&timeout=6m55s&timeoutSeconds=415&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[86211054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.833) (total time: 12284ms):
	Trace[86211054]: ---"Objects listed" error:Unauthorized 12284ms (18:28:13.118)
	Trace[86211054]: [12.284846568s] [12.284846568s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[112395228]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:01.095) (total time: 12023ms):
	Trace[112395228]: ---"Objects listed" error:Unauthorized 12023ms (18:28:13.118)
	Trace[112395228]: [12.023135648s] [12.023135648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3c889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44600 - 56811 "HINFO IN 296503675936248183.3736117151437012322. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006872615s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43650->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b] <==
	[INFO] 10.244.2.2:35209 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219373s
	[INFO] 10.244.1.2:37537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135802s
	[INFO] 10.244.1.2:50389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105335s
	[INFO] 10.244.0.4:53486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200332s
	[INFO] 10.244.0.4:53550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000308188s
	[INFO] 10.244.2.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134191s
	[INFO] 10.244.1.2:43514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127501s
	[INFO] 10.244.1.2:54638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089684s
	[INFO] 10.244.1.2:43811 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187377s
	[INFO] 10.244.1.2:38538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164864s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1529&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[601454333]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.779) (total time: 12338ms):
	Trace[601454333]: ---"Objects listed" error:Unauthorized 12338ms (18:28:13.117)
	Trace[601454333]: [12.338439408s] [12.338439408s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1851486924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.685) (total time: 12432ms):
	Trace[1851486924]: ---"Objects listed" error:Unauthorized 12432ms (18:28:13.118)
	Trace[1851486924]: [12.432164604s] [12.432164604s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-105786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:22 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-105786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 83805f81be844e0c8f423f0d34e721b6
	  System UUID:                83805f81-be84-4e0c-8f42-3f0d34e721b6
	  Boot ID:                    592e9c66-43d6-494c-b6d9-c848f3c684fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4h99c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 coredns-5dd5756b68-cx8rc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 coredns-5dd5756b68-jsddl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     13m
	  kube-system                 etcd-ha-105786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-9b2pr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-105786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-105786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-hd8mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-105786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-105786                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 101s                   kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m (x8 over 13m)      kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     13m (x7 over 13m)      kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    13m (x8 over 13m)      kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  13m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 13m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  13m                    kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    13m                    kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     13m                    kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           13m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   NodeReady                13m                    kubelet          Node ha-105786 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           10m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Warning  ContainerGCFailed        2m36s (x2 over 3m36s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           93s                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           91s                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           33s                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	
	
	Name:               ha-105786-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:17 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-105786-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19ca741ee10483194e2397e40db9727
	  System UUID:                d19ca741-ee10-4831-94e2-397e40db9727
	  Boot ID:                    1fd6919e-b89c-4ad1-b096-54490f7c15ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-k6gxp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-105786-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         12m
	  kube-system                 kindnet-vpgvl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      12m
	  kube-system                 kube-apiserver-ha-105786-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-controller-manager-ha-105786-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-proxy-qpz89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-scheduler-ha-105786-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         12m
	  kube-system                 kube-vip-ha-105786-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         12m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                  From             Message
	  ----    ------                   ----                 ----             -------
	  Normal  Starting                 12m                  kube-proxy       
	  Normal  Starting                 87s                  kube-proxy       
	  Normal  RegisteredNode           12m                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           10m                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  NodeNotReady             9m2s                 node-controller  Node ha-105786-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m9s (x8 over 2m9s)  kubelet          Node ha-105786-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m9s                 kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m9s (x8 over 2m9s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m9s (x7 over 2m9s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m9s                 kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           93s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           91s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           33s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	
	
	Name:               ha-105786-m03
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_21_14_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:21:10 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m03
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:24 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:31:53 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:31:53 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:31:53 +0000   Thu, 14 Mar 2024 18:21:10 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:31:53 +0000   Thu, 14 Mar 2024 18:21:19 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.190
	  Hostname:    ha-105786-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 52afc1e0396540bd95588eb0b4583ac2
	  System UUID:                52afc1e0-3965-40bd-9558-8eb0b4583ac2
	  Boot ID:                    75257f3b-efab-41d6-9be3-6463a8c0e614
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-g4zv5                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 etcd-ha-105786-m03                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         11m
	  kube-system                 kindnet-gmvl5                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-apiserver-ha-105786-m03             250m (12%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-controller-manager-ha-105786-m03    200m (10%)    0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-proxy-6rjsv                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-scheduler-ha-105786-m03             100m (5%)     0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 kube-vip-ha-105786-m03                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type     Reason                   Age   From             Message
	  ----     ------                   ----  ----             -------
	  Normal   Starting                 11m   kube-proxy       
	  Normal   Starting                 42s   kube-proxy       
	  Normal   RegisteredNode           11m   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal   RegisteredNode           11m   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal   RegisteredNode           10m   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal   RegisteredNode           93s   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal   RegisteredNode           91s   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	  Normal   Starting                 64s   kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  64s   kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  64s   kubelet          Node ha-105786-m03 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    64s   kubelet          Node ha-105786-m03 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     64s   kubelet          Node ha-105786-m03 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 64s   kubelet          Node ha-105786-m03 has been rebooted, boot id: 75257f3b-efab-41d6-9be3-6463a8c0e614
	  Normal   RegisteredNode           33s   node-controller  Node ha-105786-m03 event: Registered Node ha-105786-m03 in Controller
	
	
	Name:               ha-105786-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:18 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-105786-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e09570d6bc045a59dfec434fd490a91
	  System UUID:                7e09570d-6bc0-45a5-9dfe-c434fd490a91
	  Boot ID:                    ebe73455-63b4-449d-a759-d50e720d4746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-fzjdr       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      10m
	  kube-system                 kube-proxy-bftws    0 (0%)        0 (0%)      0 (0%)           0 (0%)         10m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                 From             Message
	  ----     ------                   ----                ----             -------
	  Normal   Starting                 10m                 kube-proxy       
	  Normal   Starting                 5s                  kube-proxy       
	  Normal   RegisteredNode           10m                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           10m                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           10m                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             9m12s               node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   NodeReady                8m2s (x2 over 10m)  kubelet          Node ha-105786-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  8m2s (x6 over 10m)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m2s (x6 over 10m)  kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m2s (x6 over 10m)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           93s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           91s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             53s                 node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           33s                 node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   Starting                 8s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  8s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  8s (x2 over 8s)     kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8s (x2 over 8s)     kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8s (x2 over 8s)     kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 8s                  kubelet          Node ha-105786-m04 has been rebooted, boot id: ebe73455-63b4-449d-a759-d50e720d4746
	  Normal   NodeReady                8s                  kubelet          Node ha-105786-m04 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.323219] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062088] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057119] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.191786] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127293] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261879] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.345908] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065032] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.795309] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.848496] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.157868] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.914153] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[Mar14 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.437406] kauditd_printk_skb: 73 callbacks suppressed
	[Mar14 18:29] systemd-fstab-generator[4133]: Ignoring "noauto" option for root device
	[  +0.177931] systemd-fstab-generator[4145]: Ignoring "noauto" option for root device
	[  +0.190681] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[  +0.144149] systemd-fstab-generator[4171]: Ignoring "noauto" option for root device
	[  +0.260798] systemd-fstab-generator[4195]: Ignoring "noauto" option for root device
	[  +3.681548] systemd-fstab-generator[4302]: Ignoring "noauto" option for root device
	[  +6.707178] kauditd_printk_skb: 127 callbacks suppressed
	[Mar14 18:30] kauditd_printk_skb: 88 callbacks suppressed
	[ +27.847589] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.765007] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659] <==
	{"level":"warn","ts":"2024-03-14T18:31:33.954584Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"5f103b5cc98956f4","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"167.647779ms"}
	{"level":"warn","ts":"2024-03-14T18:31:33.954762Z","caller":"etcdserver/raft.go:416","msg":"leader failed to send out heartbeat on time; took too long, leader is overloaded likely from slow disk","to":"49a9455a573a24bd","heartbeat-interval":"100ms","expected-duration":"200ms","exceeded-duration":"167.807614ms"}
	{"level":"warn","ts":"2024-03-14T18:31:33.969089Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"376.63044ms","expected-duration":"100ms","prefix":"","request":"header:<ID:8305638535720496742 username:\"kube-apiserver-etcd-client\" auth_revision:1 > txn:<compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:1911 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2854 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >>","response":"size:16"}
	{"level":"info","ts":"2024-03-14T18:31:33.97031Z","caller":"traceutil/trace.go:171","msg":"trace[1859440374] transaction","detail":"{read_only:false; response_revision:1941; number_of_response:1; }","duration":"507.231046ms","start":"2024-03-14T18:31:33.462999Z","end":"2024-03-14T18:31:33.97023Z","steps":["trace[1859440374] 'process raft request'  (duration: 124.190253ms)","trace[1859440374] 'compare'  (duration: 369.09135ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:31:33.970505Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:31:33.462983Z","time spent":"507.472905ms","remote":"127.0.0.1:54692","response type":"/etcdserverpb.KV/Txn","request count":1,"request size":2905,"response count":0,"response size":38,"request content":"compare:<target:MOD key:\"/registry/daemonsets/kube-system/kube-proxy\" mod_revision:1911 > success:<request_put:<key:\"/registry/daemonsets/kube-system/kube-proxy\" value_size:2854 >> failure:<request_range:<key:\"/registry/daemonsets/kube-system/kube-proxy\" > >"}
	{"level":"info","ts":"2024-03-14T18:31:33.972187Z","caller":"traceutil/trace.go:171","msg":"trace[1270221998] linearizableReadLoop","detail":"{readStateIndex:2313; appliedIndex:2312; }","duration":"356.047908ms","start":"2024-03-14T18:31:33.616113Z","end":"2024-03-14T18:31:33.972161Z","steps":["trace[1270221998] 'read index received'  (duration: 345.24441ms)","trace[1270221998] 'applied index is now lower than readState.Index'  (duration: 10.801452ms)"],"step_count":2}
	{"level":"warn","ts":"2024-03-14T18:31:33.972416Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"356.311819ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/ha-105786-m04\" ","response":"range_response_count:1 size:4173"}
	{"level":"info","ts":"2024-03-14T18:31:33.972501Z","caller":"traceutil/trace.go:171","msg":"trace[1142418684] range","detail":"{range_begin:/registry/minions/ha-105786-m04; range_end:; response_count:1; response_revision:1942; }","duration":"356.400929ms","start":"2024-03-14T18:31:33.616088Z","end":"2024-03-14T18:31:33.972489Z","steps":["trace[1142418684] 'agreement among raft nodes before linearized reading'  (duration: 356.183524ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:31:33.972529Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:31:33.616074Z","time spent":"356.444578ms","remote":"127.0.0.1:54402","response type":"/etcdserverpb.KV/Range","request count":0,"request size":33,"response count":1,"response size":4195,"request content":"key:\"/registry/minions/ha-105786-m04\" "}
	{"level":"warn","ts":"2024-03-14T18:31:33.974311Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"176.270745ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/pods/kube-system/kube-apiserver-ha-105786-m03\" ","response":"range_response_count:1 size:6556"}
	{"level":"info","ts":"2024-03-14T18:31:33.974406Z","caller":"traceutil/trace.go:171","msg":"trace[1021992726] range","detail":"{range_begin:/registry/pods/kube-system/kube-apiserver-ha-105786-m03; range_end:; response_count:1; response_revision:1942; }","duration":"176.377117ms","start":"2024-03-14T18:31:33.798017Z","end":"2024-03-14T18:31:33.974394Z","steps":["trace[1021992726] 'agreement among raft nodes before linearized reading'  (duration: 176.221055ms)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:31:33.974464Z","caller":"etcdserver/util.go:170","msg":"apply request took too long","took":"263.461699ms","expected-duration":"100ms","prefix":"read-only range ","request":"key:\"/registry/minions/\" range_end:\"/registry/minions0\" ","response":"range_response_count:4 size:19876"}
	{"level":"info","ts":"2024-03-14T18:31:33.97453Z","caller":"traceutil/trace.go:171","msg":"trace[842259949] range","detail":"{range_begin:/registry/minions/; range_end:/registry/minions0; response_count:4; response_revision:1942; }","duration":"263.526605ms","start":"2024-03-14T18:31:33.71099Z","end":"2024-03-14T18:31:33.974516Z","steps":["trace[842259949] 'agreement among raft nodes before linearized reading'  (duration: 262.681327ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:31:34.667559Z","caller":"rafthttp/peer_status.go:53","msg":"peer became active","peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.688507Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b385368e7357343","to":"49a9455a573a24bd","stream-type":"stream MsgApp v2"}
	{"level":"info","ts":"2024-03-14T18:31:34.688641Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.692506Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.698048Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.705084Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b385368e7357343","to":"49a9455a573a24bd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-14T18:31:34.705143Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:31:34.713654Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:55874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744677Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-14T18:31:41.235943Z","caller":"traceutil/trace.go:171","msg":"trace[1249088434] transaction","detail":"{read_only:false; response_revision:1970; number_of_response:1; }","duration":"103.823426ms","start":"2024-03-14T18:31:41.132016Z","end":"2024-03-14T18:31:41.235839Z","steps":["trace[1249088434] 'process raft request'  (duration: 103.664046ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:31:41.235959Z","caller":"traceutil/trace.go:171","msg":"trace[246627116] transaction","detail":"{read_only:false; response_revision:1969; number_of_response:1; }","duration":"110.52398ms","start":"2024-03-14T18:31:41.125415Z","end":"2024-03-14T18:31:41.235939Z","steps":["trace[246627116] 'process raft request'  (duration: 110.04873ms)"],"step_count":1}
	
	
	==> etcd [ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc] <==
	{"level":"info","ts":"2024-03-14T18:28:15.122911Z","caller":"traceutil/trace.go:171","msg":"trace[256093788] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; }","duration":"8.014776074s","start":"2024-03-14T18:28:07.108124Z","end":"2024-03-14T18:28:15.122901Z","steps":["trace[256093788] 'agreement among raft nodes before linearized reading'  (duration: 8.009384348s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:28:15.122939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.108122Z","time spent":"8.014802071s","remote":"127.0.0.1:48602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.120149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.116653Z","time spent":"8.003484208s","remote":"127.0.0.1:48846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.184325Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:28:15.184437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T18:28:15.184672Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b385368e7357343","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-14T18:28:15.185082Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185145Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185176Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.18525Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185323Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185356Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185366Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185371Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185379Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185424Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185472Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185596Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185822Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.190126Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190236Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190247Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-105786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.170:2380"],"advertise-client-urls":["https://192.168.39.170:2379"]}
	
	
	==> kernel <==
	 18:32:27 up 14 min,  0 users,  load average: 0.34, 0.42, 0.26
	Linux ha-105786 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b] <==
	I0314 18:29:59.107185       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 18:30:02.316124       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:05.389271       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:16.399467       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0314 18:30:23.820510       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.15:54244->10.96.0.1:443: read: connection reset by peer
	I0314 18:30:26.896814       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad] <==
	I0314 18:31:53.363309       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:32:03.372392       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:32:03.372442       1 main.go:227] handling current node
	I0314 18:32:03.372453       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:32:03.372459       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:32:03.372560       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:32:03.372590       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:32:03.372653       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:32:03.372742       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:32:13.391882       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:32:13.392008       1 main.go:227] handling current node
	I0314 18:32:13.392051       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:32:13.392081       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:32:13.392246       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:32:13.392297       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:32:13.392387       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:32:13.392416       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:32:23.402365       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:32:23.402490       1 main.go:227] handling current node
	I0314 18:32:23.402535       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:32:23.402554       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:32:23.402804       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:32:23.402864       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:32:23.402967       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:32:23.402991       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f] <==
	I0314 18:30:41.610329       1 cache.go:39] Caches are synced for AvailableConditionController controller
	E0314 18:30:41.612319       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0314 18:30:41.612403       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	I0314 18:30:41.613271       1 shared_informer.go:318] Caches are synced for configmaps
	E0314 18:30:41.613373       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0314 18:30:41.613420       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:30:41.613426       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0314 18:30:41.613526       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:30:41.619858       1 timeout.go:142] post-timeout activity - time-elapsed: 8.755378ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	I0314 18:30:41.628820       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0314 18:30:41.647391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.190]
	I0314 18:30:41.648795       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:30:41.653582       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:30:41.653682       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:30:41.653783       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:30:41.653807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:30:41.653831       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:30:41.671104       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0314 18:30:41.690757       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0314 18:30:42.509110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0314 18:30:42.924430       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.190 192.168.39.245]
	I0314 18:31:33.971872       1 trace.go:236] Trace[27322001]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,client:192.168.39.170,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (14-Mar-2024 18:31:33.455) (total time: 516ms):
	Trace[27322001]: ["GuaranteedUpdate etcd3" audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 515ms (18:31:33.456)
	Trace[27322001]:  ---"Txn call completed" 509ms (18:31:33.971)]
	Trace[27322001]: [516.725224ms] [516.725224ms] END
	
	
	==> kube-apiserver [782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5] <==
	I0314 18:29:59.229131       1 options.go:220] external host was not specified, using 192.168.39.170
	I0314 18:29:59.230543       1 server.go:148] Version: v1.28.4
	I0314 18:29:59.230609       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:29:59.822983       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 18:29:59.835789       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 18:29:59.835914       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 18:29:59.836208       1 instance.go:298] Using reconciler: lease
	W0314 18:30:19.820801       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0314 18:30:19.823745       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0314 18:30:19.837423       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0314 18:30:19.837443       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e] <==
	I0314 18:30:00.096891       1 serving.go:348] Generated self-signed cert in-memory
	I0314 18:30:00.761362       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 18:30:00.761449       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:00.763371       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 18:30:00.763568       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 18:30:00.764667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:30:00.764857       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0314 18:30:20.844984       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.170:8443/healthz\": dial tcp 192.168.39.170:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f] <==
	I0314 18:30:55.398334       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="142.039µs"
	I0314 18:30:55.398521       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="78.635µs"
	I0314 18:30:55.398575       1 shared_informer.go:318] Caches are synced for endpoint
	I0314 18:30:55.792069       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 18:30:55.792157       1 garbagecollector.go:166] "All resource monitors have synced. Proceeding to collect garbage"
	I0314 18:30:55.831047       1 shared_informer.go:318] Caches are synced for garbage collector
	I0314 18:30:56.384380       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-gdtqn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gdtqn\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:30:56.384681       1 event.go:298] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9422523c-2fbd-4f44-afb9-9f7a998baf3a", APIVersion:"v1", ResourceVersion:"308", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gdtqn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gdtqn": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:30:56.392349       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:30:56.412275       1 endpointslice_controller.go:310] "Error syncing endpoint slices for service, retrying" key="kube-system/kube-dns" err="failed to update kube-dns-gdtqn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io \"kube-dns-gdtqn\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:30:56.412796       1 event.go:298] Event(v1.ObjectReference{Kind:"Service", Namespace:"kube-system", Name:"kube-dns", UID:"9422523c-2fbd-4f44-afb9-9f7a998baf3a", APIVersion:"v1", ResourceVersion:"308", FieldPath:""}): type: 'Warning' reason: 'FailedToUpdateEndpointSlices' Error updating Endpoint Slices for Service kube-system/kube-dns: failed to update kube-dns-gdtqn EndpointSlice for Service kube-system/kube-dns: Operation cannot be fulfilled on endpointslices.discovery.k8s.io "kube-dns-gdtqn": the object has been modified; please apply your changes to the latest version and try again
	I0314 18:30:56.414368       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:30:56.433057       1 event.go:307] "Event occurred" object="kube-system/kube-dns" fieldPath="" kind="Endpoints" apiVersion="v1" type="Warning" reason="FailedToUpdateEndpoint" message="Failed to update endpoint kube-system/kube-dns: Operation cannot be fulfilled on endpoints \"kube-dns\": the object has been modified; please apply your changes to the latest version and try again"
	I0314 18:30:56.445987       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="95.539349ms"
	I0314 18:30:56.446183       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="53.647µs"
	I0314 18:30:56.487063       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="40.692598ms"
	I0314 18:30:56.487618       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/coredns-5dd5756b68" duration="254.1µs"
	I0314 18:30:57.218235       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="312.76µs"
	I0314 18:31:00.731567       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="7.412856ms"
	I0314 18:31:00.732299       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="260.278µs"
	I0314 18:31:23.907654       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.551996ms"
	I0314 18:31:23.907868       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.296µs"
	I0314 18:31:43.333440       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="10.772694ms"
	I0314 18:31:43.333819       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="155.615µs"
	I0314 18:32:18.356880       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	
	
	==> kube-proxy [50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9] <==
	I0314 18:19:02.664250       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:19:02.714117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:02.714162       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:02.716766       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:02.717745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:02.718063       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:02.718100       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:02.720154       1 config.go:188] "Starting service config controller"
	I0314 18:19:02.720408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:02.720465       1 config.go:315] "Starting node config controller"
	I0314 18:19:02.720491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:02.721511       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:02.721558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:02.820597       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:02.820664       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:02.822840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	E0314 18:28:13.122344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	W0314 18:28:15.103846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Unauthorized
	E0314 18:28:15.103895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	W0314 18:28:15.103973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Unauthorized
	E0314 18:28:15.103981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	W0314 18:28:15.107552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Unauthorized
	E0314 18:28:15.107669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	
	
	==> kube-proxy [dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c] <==
	I0314 18:30:00.646660       1 server_others.go:69] "Using iptables proxy"
	E0314 18:30:03.724306       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:06.798355       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:09.869081       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:16.012282       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:28.301082       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	I0314 18:30:44.907767       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:30:44.959317       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:30:44.959379       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:30:44.963966       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:30:44.964077       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:30:44.964340       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:30:44.964376       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:44.966061       1 config.go:188] "Starting service config controller"
	I0314 18:30:44.966133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:30:44.966161       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:30:44.966165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:30:44.966945       1 config.go:315] "Starting node config controller"
	I0314 18:30:44.966981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:30:45.066672       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:30:45.066839       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:30:45.067090       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2] <==
	W0314 18:28:07.898881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:07.898938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 18:28:07.926438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 18:28:07.926552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 18:28:08.127945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:08.128049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:08.703213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:28:08.703272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:28:08.811171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:28:08.811297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:28:09.018063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:28:09.018138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:28:09.165241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:28:09.165338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:28:09.363080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:28:09.363186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:28:10.170615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:10.170677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:10.170950       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:28:10.171006       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:28:10.215817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:28:10.215918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 18:28:14.418627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:14.418766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:15.079433       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32] <==
	W0314 18:30:38.048182       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.048288       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:38.149405       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.149501       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csinodes?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:38.717457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.717554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:38.957547       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.957635       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.067393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.067456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.432829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.432878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:41.548905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:30:41.548991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:30:41.549072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:30:41.549103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:30:41.549174       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:30:41.549221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:30:41.549408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:30:41.549568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:30:41.549594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:30:41.549738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:30:41.559235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 18:30:41.560822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 18:30:52.555852       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 18:30:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:30:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:30:55 ha-105786 kubelet[1439]: I0314 18:30:55.303262    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:30:55 ha-105786 kubelet[1439]: E0314 18:30:55.303622    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:01 ha-105786 kubelet[1439]: I0314 18:31:01.304323    1439 scope.go:117] "RemoveContainer" containerID="ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477"
	Mar 14 18:31:02 ha-105786 kubelet[1439]: I0314 18:31:02.303991    1439 scope.go:117] "RemoveContainer" containerID="6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b"
	Mar 14 18:31:07 ha-105786 kubelet[1439]: I0314 18:31:07.304339    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:07 ha-105786 kubelet[1439]: E0314 18:31:07.304819    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:18 ha-105786 kubelet[1439]: I0314 18:31:18.304241    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:18 ha-105786 kubelet[1439]: E0314 18:31:18.306367    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:33 ha-105786 kubelet[1439]: I0314 18:31:33.303845    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:33 ha-105786 kubelet[1439]: E0314 18:31:33.304139    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:44 ha-105786 kubelet[1439]: I0314 18:31:44.304481    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:44 ha-105786 kubelet[1439]: E0314 18:31:44.304998    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:50 ha-105786 kubelet[1439]: E0314 18:31:50.364412    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:31:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:31:56 ha-105786 kubelet[1439]: I0314 18:31:56.303920    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:56 ha-105786 kubelet[1439]: E0314 18:31:56.304283    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:07 ha-105786 kubelet[1439]: I0314 18:32:07.303934    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:07 ha-105786 kubelet[1439]: E0314 18:32:07.304975    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:18 ha-105786 kubelet[1439]: I0314 18:32:18.304386    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:18 ha-105786 kubelet[1439]: E0314 18:32:18.306308    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:32:25.793534  967430 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105786 -n ha-105786
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/RestartClusterKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/RestartClusterKeepsNodes (376.91s)
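
The controller-manager, kube-proxy, and scheduler logs above share one recurring symptom: the apiserver's /healthz endpoint, reached either directly (192.168.39.170:8443) or through the kube-vip virtual IP (192.168.39.254:8443), refuses connections or times out while the cluster restarts. The Go program below is a minimal, self-contained sketch of that kind of probe, useful for reproducing the check by hand against a test VM; it is not minikube's own status code, and the retry count, timeout, and TLS handling are illustrative assumptions.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// probeHealthz issues an HTTPS GET against a /healthz URL, retrying a few
// times with a short pause, and returns the last error if it never sees 200.
func probeHealthz(url string, attempts int, timeout time.Duration) error {
	client := &http.Client{
		Timeout: timeout,
		// The test VMs use self-signed certificates; skipping verification
		// keeps the sketch self-contained. A real check would pin the cluster CA.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	var lastErr error
	for i := 0; i < attempts; i++ {
		resp, err := client.Get(url)
		if err != nil {
			lastErr = err
			time.Sleep(250 * time.Millisecond) // crude pause between retries
			continue
		}
		body, _ := io.ReadAll(resp.Body)
		resp.Body.Close()
		if resp.StatusCode == http.StatusOK {
			fmt.Printf("healthz returned 200: %s\n", body)
			return nil
		}
		lastErr = fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
	}
	return lastErr
}

func main() {
	// 192.168.39.254:8443 is the kube-vip virtual IP seen in the logs above.
	if err := probeHealthz("https://192.168.39.254:8443/healthz", 3, 5*time.Second); err != nil {
		fmt.Println("apiserver not healthy:", err)
	}
}
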

                                                
                                    
x
+
TestMutliControlPlane/serial/DeleteSecondaryNode (48.97s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/DeleteSecondaryNode
ha_test.go:487: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 node delete m03 -v=7 --alsologtostderr
ha_test.go:487: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 node delete m03 -v=7 --alsologtostderr: (16.711051188s)
ha_test.go:493: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:493: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 2 (29.663300897s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Stopped
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Running
	kubelet: Running
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:32:44.959990  967674 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:32:44.960143  967674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:44.960154  967674 out.go:304] Setting ErrFile to fd 2...
	I0314 18:32:44.960158  967674 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:32:44.960384  967674 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:32:44.960566  967674 out.go:298] Setting JSON to false
	I0314 18:32:44.960595  967674 mustload.go:65] Loading cluster: ha-105786
	I0314 18:32:44.960637  967674 notify.go:220] Checking for updates...
	I0314 18:32:44.960969  967674 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:32:44.960985  967674 status.go:255] checking status of ha-105786 ...
	I0314 18:32:44.961353  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:32:44.961411  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:32:44.980154  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41799
	I0314 18:32:44.980560  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:32:44.981114  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:32:44.981136  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:32:44.981519  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:32:44.981739  967674 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:32:44.983544  967674 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:32:44.983570  967674 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:32:44.983850  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:32:44.983882  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:32:44.999193  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39971
	I0314 18:32:44.999645  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:32:45.000161  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:32:45.000177  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:32:45.000516  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:32:45.000700  967674 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:32:45.003444  967674 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:32:45.003926  967674 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:32:45.003972  967674 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:32:45.004048  967674 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:32:45.004349  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:32:45.004387  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:32:45.020538  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37073
	I0314 18:32:45.020962  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:32:45.021443  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:32:45.021466  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:32:45.021849  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:32:45.022012  967674 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:32:45.022216  967674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:32:45.022248  967674 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:32:45.025578  967674 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:32:45.025993  967674 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:32:45.026018  967674 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:32:45.026170  967674 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:32:45.026354  967674 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:32:45.026490  967674 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:32:45.026611  967674 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:32:45.104812  967674 ssh_runner.go:195] Run: systemctl --version
	I0314 18:32:45.112020  967674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:32:45.129157  967674 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:32:45.129184  967674 api_server.go:166] Checking apiserver status ...
	I0314 18:32:45.129220  967674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:32:45.145885  967674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5437/cgroup
	W0314 18:32:45.159019  967674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5437/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:32:45.159062  967674 ssh_runner.go:195] Run: ls
	I0314 18:32:45.163993  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:32:50.165298  967674 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:32:50.165369  967674 retry.go:31] will retry after 219.228111ms: state is "Stopped"
	I0314 18:32:50.384728  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:32:55.386031  967674 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:32:55.386100  967674 retry.go:31] will retry after 279.622151ms: state is "Stopped"
	I0314 18:32:55.666590  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:33:00.667422  967674 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:33:00.667481  967674 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:33:00.667490  967674 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Stopped Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:33:00.667540  967674 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:33:00.667867  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:00.667908  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:00.684049  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33631
	I0314 18:33:00.684542  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:00.685081  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:00.685113  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:00.685398  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:00.685609  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:33:00.687237  967674 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:33:00.687258  967674 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:33:00.687600  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:00.687637  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:00.703072  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37645
	I0314 18:33:00.703480  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:00.703922  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:00.703938  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:00.704282  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:00.704465  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:33:00.707330  967674 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:33:00.707781  967674 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:30:04 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:33:00.707808  967674 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:33:00.707931  967674 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:33:00.708241  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:00.708285  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:00.723370  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40121
	I0314 18:33:00.723766  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:00.724190  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:00.724230  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:00.724570  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:00.724780  967674 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:33:00.724968  967674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:33:00.724993  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:33:00.727694  967674 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:33:00.728151  967674 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:30:04 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:33:00.728189  967674 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:33:00.728310  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:33:00.728485  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:33:00.728629  967674 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:33:00.728754  967674 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:33:00.814580  967674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:33:00.831840  967674 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:33:00.831868  967674 api_server.go:166] Checking apiserver status ...
	I0314 18:33:00.831902  967674 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:33:00.846640  967674 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup
	W0314 18:33:00.857687  967674 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:33:00.857736  967674 ssh_runner.go:195] Run: ls
	I0314 18:33:00.862635  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:33:05.863186  967674 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:33:05.863249  967674 retry.go:31] will retry after 189.624823ms: state is "Stopped"
	I0314 18:33:06.053593  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:33:11.054442  967674 api_server.go:269] stopped: https://192.168.39.254:8443/healthz: Get "https://192.168.39.254:8443/healthz": context deadline exceeded (Client.Timeout exceeded while awaiting headers)
	I0314 18:33:11.054497  967674 retry.go:31] will retry after 313.517233ms: state is "Stopped"
	I0314 18:33:11.369005  967674 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:33:14.397791  967674 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:33:14.397825  967674 status.go:422] ha-105786-m02 apiserver status = Running (err=<nil>)
	I0314 18:33:14.397842  967674 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:33:14.397860  967674 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:33:14.398191  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:14.398227  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:14.414655  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34603
	I0314 18:33:14.415181  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:14.415736  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:14.415762  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:14.416155  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:14.416410  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:33:14.418162  967674 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:33:14.418178  967674 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:33:14.418455  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:14.418492  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:14.435657  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37327
	I0314 18:33:14.436074  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:14.436579  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:14.436608  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:14.436944  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:14.437143  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:33:14.439876  967674 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:14.440279  967674 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:33:14.440308  967674 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:14.440431  967674 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:33:14.440741  967674 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:14.440783  967674 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:14.455857  967674 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45221
	I0314 18:33:14.456286  967674 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:14.456818  967674 main.go:141] libmachine: Using API Version  1
	I0314 18:33:14.456840  967674 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:14.457130  967674 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:14.457328  967674 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:33:14.457496  967674 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:33:14.457521  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:33:14.460088  967674 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:14.460515  967674 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:33:14.460553  967674 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:14.460681  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:33:14.460832  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:33:14.460987  967674 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:33:14.461105  967674 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:33:14.544950  967674 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:33:14.562501  967674 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
ha_test.go:495: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr" : exit status 2
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105786 -n ha-105786
helpers_test.go:244: <<< TestMutliControlPlane/serial/DeleteSecondaryNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/DeleteSecondaryNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 logs -n 25: (1.904189174s)
helpers_test.go:252: TestMutliControlPlane/serial/DeleteSecondaryNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m04 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp testdata/cp-test.txt                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m03 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105786 node stop m02 -v=7                                                     | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105786 node start m02 -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786 -v=7                                                           | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-105786 -v=7                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-105786 --wait=true -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	| node    | ha-105786 node delete m03 -v=7                                                   | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:28:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:28:14.135581  966381 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:28:14.135809  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.135819  966381 out.go:304] Setting ErrFile to fd 2...
	I0314 18:28:14.135823  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.136017  966381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:28:14.136635  966381 out.go:298] Setting JSON to false
	I0314 18:28:14.137555  966381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":94246,"bootTime":1710346648,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:28:14.137623  966381 start.go:139] virtualization: kvm guest
	I0314 18:28:14.140151  966381 out.go:177] * [ha-105786] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:28:14.141865  966381 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:28:14.143139  966381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:28:14.141950  966381 notify.go:220] Checking for updates...
	I0314 18:28:14.145606  966381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:28:14.146880  966381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:28:14.148172  966381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:28:14.149433  966381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:28:14.151456  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.151545  966381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:28:14.151938  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.152008  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.167584  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
	I0314 18:28:14.167994  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.168545  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.168568  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.168939  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.169128  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.204576  966381 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:28:14.205886  966381 start.go:297] selected driver: kvm2
	I0314 18:28:14.205904  966381 start.go:901] validating driver "kvm2" against &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.206073  966381 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:28:14.206444  966381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.206518  966381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:28:14.221472  966381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:28:14.222163  966381 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:28:14.222234  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:28:14.222247  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:28:14.222320  966381 start.go:340] cluster config:
	{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.222495  966381 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.224385  966381 out.go:177] * Starting "ha-105786" primary control-plane node in "ha-105786" cluster
	I0314 18:28:14.225804  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:28:14.225844  966381 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:28:14.225865  966381 cache.go:56] Caching tarball of preloaded images
	I0314 18:28:14.225953  966381 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:28:14.225968  966381 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:28:14.226092  966381 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:28:14.226284  966381 start.go:360] acquireMachinesLock for ha-105786: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:28:14.226328  966381 start.go:364] duration metric: took 24.57µs to acquireMachinesLock for "ha-105786"
	I0314 18:28:14.226352  966381 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:28:14.226360  966381 fix.go:54] fixHost starting: 
	I0314 18:28:14.226657  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.226701  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.241136  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I0314 18:28:14.241530  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.241978  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.242009  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.242325  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.242572  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.242703  966381 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:28:14.244394  966381 fix.go:112] recreateIfNeeded on ha-105786: state=Running err=<nil>
	W0314 18:28:14.244429  966381 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:28:14.246194  966381 out.go:177] * Updating the running kvm2 "ha-105786" VM ...
	I0314 18:28:14.247325  966381 machine.go:94] provisionDockerMachine start ...
	I0314 18:28:14.247347  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.247553  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.250303  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.250837  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.250860  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.251033  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.251208  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251368  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251526  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.251694  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.251915  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.251931  966381 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:28:14.358280  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.358312  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358557  966381 buildroot.go:166] provisioning hostname "ha-105786"
	I0314 18:28:14.358582  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358772  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.361558  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362012  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.362038  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362163  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.362358  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362546  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362671  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.362833  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.363062  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.363078  966381 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname
	I0314 18:28:14.484069  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.484119  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.487433  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.487941  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.487983  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.488155  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.488354  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488522  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488656  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.488852  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.489074  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.489104  966381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:28:14.594587  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:28:14.594624  966381 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:28:14.594655  966381 buildroot.go:174] setting up certificates
	I0314 18:28:14.594666  966381 provision.go:84] configureAuth start
	I0314 18:28:14.594676  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.594943  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:28:14.597572  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598001  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.598034  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598168  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.600676  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601049  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.601074  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601190  966381 provision.go:143] copyHostCerts
	I0314 18:28:14.601228  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601277  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:28:14.601287  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601369  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:28:14.601478  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601510  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:28:14.601520  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601557  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:28:14.601636  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601666  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:28:14.601679  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601714  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:28:14.601795  966381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786 san=[127.0.0.1 192.168.39.170 ha-105786 localhost minikube]
	I0314 18:28:14.793407  966381 provision.go:177] copyRemoteCerts
	I0314 18:28:14.793506  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:28:14.793541  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.796538  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.796966  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.796996  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.797178  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.797386  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.797602  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.797798  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:28:14.880507  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:28:14.880595  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:28:14.909434  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:28:14.909498  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0314 18:28:14.938304  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:28:14.938363  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:28:14.967874  966381 provision.go:87] duration metric: took 373.192835ms to configureAuth
	I0314 18:28:14.967907  966381 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:28:14.968172  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.968305  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.970873  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971322  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.971347  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971576  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.971817  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972020  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972248  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.972453  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.972619  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.972633  966381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:29:45.881390  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:29:45.881434  966381 machine.go:97] duration metric: took 1m31.634090272s to provisionDockerMachine
	I0314 18:29:45.881454  966381 start.go:293] postStartSetup for "ha-105786" (driver="kvm2")
	I0314 18:29:45.881471  966381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:29:45.881500  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:45.881876  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:29:45.881911  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:45.885484  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886080  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:45.886109  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886288  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:45.886498  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:45.886677  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:45.886828  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:45.969077  966381 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:29:45.973787  966381 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:29:45.973816  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:29:45.973920  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:29:45.973999  966381 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:29:45.974015  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:29:45.974096  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:29:45.984935  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:46.011260  966381 start.go:296] duration metric: took 129.788525ms for postStartSetup
	I0314 18:29:46.011314  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.011643  966381 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0314 18:29:46.011670  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.014882  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015320  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.015348  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015461  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.015649  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.015818  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.015958  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	W0314 18:29:46.095015  966381 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0314 18:29:46.095038  966381 fix.go:56] duration metric: took 1m31.868679135s for fixHost
	I0314 18:29:46.095062  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.097850  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098240  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.098263  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098402  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.098618  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098810  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098954  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.099135  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:29:46.099304  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:29:46.099313  966381 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:29:46.201375  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440986.158036020
	
	I0314 18:29:46.201405  966381 fix.go:216] guest clock: 1710440986.158036020
	I0314 18:29:46.201414  966381 fix.go:229] Guest: 2024-03-14 18:29:46.15803602 +0000 UTC Remote: 2024-03-14 18:29:46.095045686 +0000 UTC m=+92.011661084 (delta=62.990334ms)
	I0314 18:29:46.201440  966381 fix.go:200] guest clock delta is within tolerance: 62.990334ms
	I0314 18:29:46.201447  966381 start.go:83] releasing machines lock for "ha-105786", held for 1m31.975109616s
	I0314 18:29:46.201474  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.201805  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:46.204592  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205008  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.205040  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205187  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205819  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205986  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.206081  966381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:29:46.206122  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.206196  966381 ssh_runner.go:195] Run: cat /version.json
	I0314 18:29:46.206230  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.209016  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209269  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209428  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209454  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209632  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.209841  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.209848  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209879  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.210105  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.210118  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210303  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.210337  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.210512  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210654  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.286133  966381 ssh_runner.go:195] Run: systemctl --version
	I0314 18:29:46.314007  966381 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:29:46.480684  966381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:29:46.491962  966381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:29:46.492020  966381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:29:46.502226  966381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:29:46.502257  966381 start.go:494] detecting cgroup driver to use...
	I0314 18:29:46.502322  966381 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:29:46.518806  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:29:46.533532  966381 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:29:46.533603  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:29:46.547594  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:29:46.562640  966381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:29:46.748094  966381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:29:46.913423  966381 docker.go:233] disabling docker service ...
	I0314 18:29:46.913498  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:29:46.935968  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:29:46.951541  966381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:29:47.101123  966381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:29:47.251805  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:29:47.268510  966381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:29:47.289410  966381 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:29:47.289473  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.300842  966381 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:29:47.300915  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.311901  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.322706  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.333403  966381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:29:47.344317  966381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:29:47.353903  966381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:29:47.363435  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:47.507378  966381 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:29:50.656435  966381 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.149013693s)
	I0314 18:29:50.656479  966381 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:29:50.656556  966381 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:29:50.663052  966381 start.go:562] Will wait 60s for crictl version
	I0314 18:29:50.663110  966381 ssh_runner.go:195] Run: which crictl
	I0314 18:29:50.667345  966381 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:29:50.709178  966381 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:29:50.709266  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.740672  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.776822  966381 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:29:50.778198  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:50.781287  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781657  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:50.781682  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781874  966381 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:29:50.787094  966381 kubeadm.go:877] updating cluster {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:29:50.787242  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:29:50.787286  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.836424  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.836452  966381 crio.go:415] Images already preloaded, skipping extraction
	I0314 18:29:50.836534  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.883409  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.883436  966381 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:29:50.883446  966381 kubeadm.go:928] updating node { 192.168.39.170 8443 v1.28.4 crio true true} ...
	I0314 18:29:50.883558  966381 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:29:50.883631  966381 ssh_runner.go:195] Run: crio config
	I0314 18:29:50.940266  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:29:50.940294  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:29:50.940311  966381 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:29:50.940343  966381 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105786 NodeName:ha-105786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:29:50.940523  966381 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105786"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
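	The evictionHard thresholds of "0%", together with imageGCHighThresholdPercent: 100, disable disk-pressure eviction inside the test VM, and the whole document is staged on the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down. A minimal way to confirm what actually landed on the VM, assuming the profile is still running:
	    minikube -p ha-105786 ssh "sudo cat /var/tmp/minikube/kubeadm.yaml.new"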
	
	I0314 18:29:50.940545  966381 kube-vip.go:105] generating kube-vip config ...
	I0314 18:29:50.940622  966381 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name: lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
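	This static pod is what answers on the cluster's HA VIP (192.168.39.254:8443); kube-vip runs leader election over the plndr-cp-lock lease, so the address should only be bound on eth0 of whichever control-plane node currently holds the lease. The manifest is written to /etc/kubernetes/manifests/kube-vip.yaml below. A hedged way to check both the manifest and whether this node currently owns the VIP, assuming the VM is reachable:
	    minikube -p ha-105786 ssh "sudo cat /etc/kubernetes/manifests/kube-vip.yaml"
	    minikube -p ha-105786 ssh "ip -brief addr show eth0"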
	I0314 18:29:50.940691  966381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:29:50.952135  966381 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:29:50.952199  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:29:50.962147  966381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0314 18:29:50.980512  966381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:29:50.999194  966381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0314 18:29:51.017990  966381 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:29:51.037979  966381 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:29:51.042188  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:51.223538  966381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:29:51.259248  966381 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.170
	I0314 18:29:51.259276  966381 certs.go:194] generating shared ca certs ...
	I0314 18:29:51.259299  966381 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.259518  966381 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:29:51.259558  966381 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:29:51.259574  966381 certs.go:256] generating profile certs ...
	I0314 18:29:51.259649  966381 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:29:51.259676  966381 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054
	I0314 18:29:51.259692  966381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.190 192.168.39.254]
	I0314 18:29:51.368068  966381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 ...
	I0314 18:29:51.368106  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054: {Name:mk104c6891f4c562b4c5c1e2fd4fbf7ab8a19f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368317  966381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 ...
	I0314 18:29:51.368336  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054: {Name:mk5452416f251a959745d5afed1f6504eb414193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368435  966381 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:29:51.368581  966381 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:29:51.368708  966381 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:29:51.368724  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:29:51.368742  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:29:51.368755  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:29:51.368765  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:29:51.368785  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:29:51.368797  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:29:51.368818  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:29:51.368830  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:29:51.368880  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:29:51.368907  966381 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:29:51.368918  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:29:51.368936  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:29:51.368955  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:29:51.368982  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:29:51.369020  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:51.369046  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.369061  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.369070  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.369722  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:29:51.397901  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:29:51.424922  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:29:51.451188  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:29:51.476429  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 18:29:51.502245  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:29:51.528392  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:29:51.556131  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:29:51.582461  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:29:51.616725  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:29:51.644959  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:29:51.672417  966381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:29:51.691534  966381 ssh_runner.go:195] Run: openssl version
	I0314 18:29:51.699328  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:29:51.711530  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716524  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716584  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.722737  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:29:51.734143  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:29:51.746609  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751712  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751761  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.758096  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:29:51.769099  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:29:51.781745  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786840  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786914  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.793202  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:29:51.804099  966381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:29:51.809375  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:29:51.815416  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:29:51.821550  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:29:51.827643  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:29:51.833646  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:29:51.839821  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
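	Each openssl run above uses -checkend 86400, which exits 0 when the certificate remains valid for at least the next 86400 seconds (24 hours) and exits 1 otherwise; this appears to be how the existing control-plane certificates are judged reusable rather than regenerated. An equivalent manual spot check on one of them, assuming access to the VM, would look like:
	    minikube -p ha-105786 ssh "sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 && echo ok || echo expiring-within-24h"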
	I0314 18:29:51.845930  966381 kubeadm.go:391] StartCluster: {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:29:51.846070  966381 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:29:51.846123  966381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:29:51.887584  966381 cri.go:89] found id: "f7e5ce54c582966673026ea5da16491d518cf3c3aaeee9a7ceb0f7b5a1372c5f"
	I0314 18:29:51.887615  966381 cri.go:89] found id: "2f4ac879a201f1cf19940c5c6d0a391e018b4fb1e3238c5e18f108de7dfe9d49"
	I0314 18:29:51.887619  966381 cri.go:89] found id: "c1c9118d7d57dfe1750090185c762b93b050f71cdc3e71a82977bc49623be966"
	I0314 18:29:51.887621  966381 cri.go:89] found id: "ce25ae74cf40b4fa581afd2062a90e823cb3fa2657388e63ce08248941a6fad4"
	I0314 18:29:51.887624  966381 cri.go:89] found id: "ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	I0314 18:29:51.887627  966381 cri.go:89] found id: "b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b"
	I0314 18:29:51.887630  966381 cri.go:89] found id: "4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817"
	I0314 18:29:51.887632  966381 cri.go:89] found id: "fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d"
	I0314 18:29:51.887634  966381 cri.go:89] found id: "50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9"
	I0314 18:29:51.887640  966381 cri.go:89] found id: "3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2"
	I0314 18:29:51.887642  966381 cri.go:89] found id: "dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183"
	I0314 18:29:51.887644  966381 cri.go:89] found id: "ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922"
	I0314 18:29:51.887646  966381 cri.go:89] found id: "ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc"
	I0314 18:29:51.887650  966381 cri.go:89] found id: ""
	I0314 18:29:51.887692  966381 ssh_runner.go:195] Run: sudo runc list -f json
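	The bare container IDs above come from filtering on the kube-system namespace label. A more readable view of the same set, assuming crictl is available inside the VM (it is invoked throughout this log), is the default table output:
	    minikube -p ha-105786 ssh "sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system"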
	
	
	==> CRI-O <==
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.262967633Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441195262940789,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a0b61a67-eeef-4fb6-ad4f-d52b5addc639 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.263833767Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=00775da4-f416-4b5a-8b13-82ac08e7b5cf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.263942572Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=00775da4-f416-4b5a-8b13-82ac08e7b5cf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.264337470Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=00775da4-f416-4b5a-8b13-82ac08e7b5cf name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.319254049Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=01a76701-d4a7-4821-a8ee-ea474938838c name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.319391721Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=01a76701-d4a7-4821-a8ee-ea474938838c name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.327842243Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f87b0661-acd4-45c7-8549-a904ed1d7ce7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.328670867Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441195328637838,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f87b0661-acd4-45c7-8549-a904ed1d7ce7 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.329917901Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7e2b3b54-8ae9-417d-96a3-b1593f1eb17b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.330207987Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7e2b3b54-8ae9-417d-96a3-b1593f1eb17b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.332509122Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7e2b3b54-8ae9-417d-96a3-b1593f1eb17b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.381375013Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=662e868e-a822-4959-84b5-2eaa2316e6b6 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.381451011Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=662e868e-a822-4959-84b5-2eaa2316e6b6 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.383149063Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a941d076-4f43-40fe-bff4-a36ecbbbec1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.383756629Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441195383672711,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a941d076-4f43-40fe-bff4-a36ecbbbec1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.384423305Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=52ca0dd0-2575-4ec5-b9f5-b53eeea5263f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.384643640Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=52ca0dd0-2575-4ec5-b9f5-b53eeea5263f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.385098272Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=52ca0dd0-2575-4ec5-b9f5-b53eeea5263f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.433457979Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4996d5c7-c86c-4cd8-a6c3-2df80494446f name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.433553040Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4996d5c7-c86c-4cd8-a6c3-2df80494446f name=/runtime.v1.RuntimeService/Version
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.435013390Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=00db353c-16e0-4be7-b9ba-8268f4936dcd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.435440428Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441195435418699,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=00db353c-16e0-4be7-b9ba-8268f4936dcd name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.436017376Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e235af75-6b17-4db8-b6ca-cd1f30fc8567 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.436082727Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e235af75-6b17-4db8-b6ca-cd1f30fc8567 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:33:15 ha-105786 crio[4214]: time="2024-03-14 18:33:15.436461426Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e235af75-6b17-4db8-b6ca-cd1f30fc8567 name=/runtime.v1.RuntimeService/ListContainers
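The JSON blob above is the raw response the log collector receives from CRI-O's /runtime.v1.RuntimeService/ListContainers RPC over the socket named in the node annotations (unix:///var/run/crio/crio.sock). As a rough, hypothetical sketch (not part of the test harness), the same listing can be fetched in Go with the published CRI client bindings; the field names mirror what appears in the dump.

	package main
	
	import (
		"context"
		"fmt"
		"log"
		"time"
	
		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)
	
	func main() {
		// Dial the CRI-O socket reported in the node annotations above.
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			log.Fatalf("dial crio: %v", err)
		}
		defer conn.Close()
	
		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 10*time.Second)
		defer cancel()
	
		// An empty filter lists every container, running or exited,
		// which is what produces the dump above.
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			log.Fatalf("ListContainers: %v", err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s %s\n", c.Id[:13], c.Metadata.Name, c.State)
		}
	}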
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9604ba67bf9c8       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      2 minutes ago       Running             kindnet-cni               3                   219b0738aa779       kindnet-9b2pr
	3bb0081eb2a08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      2 minutes ago       Running             storage-provisioner       4                   01cedb7ac2289       storage-provisioner
	e23ea5e28f6e5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      2 minutes ago       Running             kube-controller-manager   2                   dba584eb3e199       kube-controller-manager-ha-105786
	73d5356f4c557       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      2 minutes ago       Running             kube-apiserver            3                   ca368554eaafd       kube-apiserver-ha-105786
	3704dc6ef5119       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      2 minutes ago       Running             busybox                   1                   690093c450be8       busybox-5b5d89c9d6-4h99c
	ae7c1c8fbe250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Exited              storage-provisioner       3                   01cedb7ac2289       storage-provisioner
	dbe0af9dcb333       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   6fd8ebd3d1377       kube-proxy-hd8mx
	25d7b21ffe66f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   55727b536f419       coredns-5dd5756b68-cx8rc
	a3c889282a698       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   be55e1f9341f6       coredns-5dd5756b68-jsddl
	6d30bfdc11c1c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Exited              kindnet-cni               2                   219b0738aa779       kindnet-9b2pr
	28469939f60bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Exited              kube-controller-manager   1                   dba584eb3e199       kube-controller-manager-ha-105786
	4269a70d03936       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   3374ed1f059b9       kube-scheduler-ha-105786
	782768ee692d7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Exited              kube-apiserver            2                   ca368554eaafd       kube-apiserver-ha-105786
	56ede7d89c5f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   22995c012df03       etcd-ha-105786
	ded817e115254       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      3 minutes ago       Exited              kube-vip                  8                   d61621e3b64fc       kube-vip-ha-105786
	522fa7bdb84ee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   11 minutes ago      Exited              busybox                   0                   c09b6e29d418a       busybox-5b5d89c9d6-4h99c
	b538852248364       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 minutes ago      Exited              coredns                   0                   c6041a600821e       coredns-5dd5756b68-cx8rc
	4fbdd8b34ac46       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      14 minutes ago      Exited              coredns                   0                   880e93f2a3ed5       coredns-5dd5756b68-jsddl
	50a3dcdc83e53       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      14 minutes ago      Exited              kube-proxy                0                   6d43c44b3e99b       kube-proxy-hd8mx
	3f27ba9bd31a4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      14 minutes ago      Exited              kube-scheduler            0                   ed2bf5bc80b8e       kube-scheduler-ha-105786
	ff7528019bad0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      14 minutes ago      Exited              etcd                      0                   c3fe1175987df       etcd-ha-105786
	
	
	==> coredns [25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34707 - 24413 "HINFO IN 4449984729202792723.1825095687933891679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008723816s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817] <==
	[INFO] 10.244.0.4:39798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049996s
	[INFO] 10.244.0.4:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088514s
	[INFO] 10.244.2.2:53227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129249s
	[INFO] 10.244.1.2:38289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162454s
	[INFO] 10.244.1.2:39880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157216s
	[INFO] 10.244.0.4:40457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166755s
	[INFO] 10.244.0.4:47654 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165231s
	[INFO] 10.244.2.2:56922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021872s
	[INFO] 10.244.2.2:55729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082709s
	[INFO] 10.244.2.2:40076 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091316s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1530&timeout=6m55s&timeoutSeconds=415&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[86211054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.833) (total time: 12284ms):
	Trace[86211054]: ---"Objects listed" error:Unauthorized 12284ms (18:28:13.118)
	Trace[86211054]: [12.284846568s] [12.284846568s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[112395228]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:01.095) (total time: 12023ms):
	Trace[112395228]: ---"Objects listed" error:Unauthorized 12023ms (18:28:13.118)
	Trace[112395228]: [12.023135648s] [12.023135648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3c889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44600 - 56811 "HINFO IN 296503675936248183.3736117151437012322. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006872615s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43650->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b] <==
	[INFO] 10.244.2.2:35209 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219373s
	[INFO] 10.244.1.2:37537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135802s
	[INFO] 10.244.1.2:50389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105335s
	[INFO] 10.244.0.4:53486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200332s
	[INFO] 10.244.0.4:53550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000308188s
	[INFO] 10.244.2.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134191s
	[INFO] 10.244.1.2:43514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127501s
	[INFO] 10.244.1.2:54638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089684s
	[INFO] 10.244.1.2:43811 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187377s
	[INFO] 10.244.1.2:38538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164864s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1529&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[601454333]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.779) (total time: 12338ms):
	Trace[601454333]: ---"Objects listed" error:Unauthorized 12338ms (18:28:13.117)
	Trace[601454333]: [12.338439408s] [12.338439408s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1851486924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.685) (total time: 12432ms):
	Trace[1851486924]: ---"Objects listed" error:Unauthorized 12432ms (18:28:13.118)
	Trace[1851486924]: [12.432164604s] [12.432164604s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-105786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:33:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-105786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 83805f81be844e0c8f423f0d34e721b6
	  System UUID:                83805f81-be84-4e0c-8f42-3f0d34e721b6
	  Boot ID:                    592e9c66-43d6-494c-b6d9-c848f3c684fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4h99c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 coredns-5dd5756b68-cx8rc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 coredns-5dd5756b68-jsddl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     14m
	  kube-system                 etcd-ha-105786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         14m
	  kube-system                 kindnet-9b2pr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      14m
	  kube-system                 kube-apiserver-ha-105786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-controller-manager-ha-105786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-proxy-hd8mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-scheduler-ha-105786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 kube-vip-ha-105786                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 2m30s                  kube-proxy       
	  Normal   Starting                 14m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m (x8 over 14m)      kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     14m (x7 over 14m)      kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    14m (x8 over 14m)      kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  14m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 14m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  14m                    kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    14m                    kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     14m                    kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           14m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   NodeReady                14m                    kubelet          Node ha-105786 status is now: NodeReady
	  Normal   RegisteredNode           12m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           11m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Warning  ContainerGCFailed        3m25s (x2 over 4m25s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           2m22s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           2m20s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           82s                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	
	
	Name:               ha-105786-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:33:08 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-105786-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19ca741ee10483194e2397e40db9727
	  System UUID:                d19ca741-ee10-4831-94e2-397e40db9727
	  Boot ID:                    1fd6919e-b89c-4ad1-b096-54490f7c15ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-k6gxp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	  kube-system                 etcd-ha-105786-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         13m
	  kube-system                 kindnet-vpgvl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-apiserver-ha-105786-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-controller-manager-ha-105786-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-proxy-qpz89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-scheduler-ha-105786-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         13m
	  kube-system                 kube-vip-ha-105786-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 13m                    kube-proxy       
	  Normal  Starting                 2m16s                  kube-proxy       
	  Normal  RegisteredNode           12m                    node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           11m                    node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  NodeNotReady             9m51s                  node-controller  Node ha-105786-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    2m58s (x8 over 2m58s)  kubelet          Node ha-105786-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 2m58s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  2m58s (x8 over 2m58s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     2m58s (x7 over 2m58s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  2m58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           2m22s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           2m20s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           82s                    node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	
	
	Name:               ha-105786-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:22:12 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:38 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:32:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-105786-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e09570d6bc045a59dfec434fd490a91
	  System UUID:                7e09570d-6bc0-45a5-9dfe-c434fd490a91
	  Boot ID:                    ebe73455-63b4-449d-a759-d50e720d4746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sft2w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         48s
	  kube-system                 kindnet-fzjdr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      11m
	  kube-system                 kube-proxy-bftws            0 (0%)        0 (0%)      0 (0%)           0 (0%)         11m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                  From             Message
	  ----     ------                   ----                 ----             -------
	  Normal   Starting                 10m                  kube-proxy       
	  Normal   Starting                 54s                  kube-proxy       
	  Normal   RegisteredNode           11m                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           11m                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             10m                  node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   NodeReady                8m52s (x2 over 10m)  kubelet          Node ha-105786-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  8m52s (x6 over 11m)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    8m52s (x6 over 11m)  kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     8m52s (x6 over 11m)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           2m23s                node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           2m21s                node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             103s                 node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           83s                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   Starting                 58s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  58s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  58s (x2 over 58s)    kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    58s (x2 over 58s)    kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     58s (x2 over 58s)    kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 58s                  kubelet          Node ha-105786-m04 has been rebooted, boot id: ebe73455-63b4-449d-a759-d50e720d4746
	  Normal   NodeReady                58s                  kubelet          Node ha-105786-m04 status is now: NodeReady
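All three `describe nodes` blocks above end up reporting Ready=True after the restart. For reference, a minimal client-go sketch (hypothetical, not part of the harness; the kubeconfig path is an assumption) that pulls the same node condition table programmatically:

	package main
	
	import (
		"context"
		"fmt"
		"log"
	
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)
	
	func main() {
		// Assumed path; minikube writes the profile's credentials into the kubeconfig.
		config, err := clientcmd.BuildConfigFromFlags("", "/root/.kube/config")
		if err != nil {
			log.Fatal(err)
		}
		clientset, err := kubernetes.NewForConfig(config)
		if err != nil {
			log.Fatal(err)
		}
		nodes, err := clientset.CoreV1().Nodes().List(context.Background(), metav1.ListOptions{})
		if err != nil {
			log.Fatal(err)
		}
		// Print the same Type/Status/Reason columns that `kubectl describe node` shows.
		for _, n := range nodes.Items {
			for _, c := range n.Status.Conditions {
				fmt.Printf("%-16s %-16s %-6s %s\n", n.Name, c.Type, c.Status, c.Reason)
			}
		}
	}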
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.323219] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062088] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057119] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.191786] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127293] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261879] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.345908] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065032] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.795309] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.848496] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.157868] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.914153] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[Mar14 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.437406] kauditd_printk_skb: 73 callbacks suppressed
	[Mar14 18:29] systemd-fstab-generator[4133]: Ignoring "noauto" option for root device
	[  +0.177931] systemd-fstab-generator[4145]: Ignoring "noauto" option for root device
	[  +0.190681] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[  +0.144149] systemd-fstab-generator[4171]: Ignoring "noauto" option for root device
	[  +0.260798] systemd-fstab-generator[4195]: Ignoring "noauto" option for root device
	[  +3.681548] systemd-fstab-generator[4302]: Ignoring "noauto" option for root device
	[  +6.707178] kauditd_printk_skb: 127 callbacks suppressed
	[Mar14 18:30] kauditd_printk_skb: 88 callbacks suppressed
	[ +27.847589] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.765007] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659] <==
	{"level":"info","ts":"2024-03-14T18:31:34.698048Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.705084Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b385368e7357343","to":"49a9455a573a24bd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-14T18:31:34.705143Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:31:34.713654Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:55874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744677Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-14T18:31:41.235943Z","caller":"traceutil/trace.go:171","msg":"trace[1249088434] transaction","detail":"{read_only:false; response_revision:1970; number_of_response:1; }","duration":"103.823426ms","start":"2024-03-14T18:31:41.132016Z","end":"2024-03-14T18:31:41.235839Z","steps":["trace[1249088434] 'process raft request'  (duration: 103.664046ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:31:41.235959Z","caller":"traceutil/trace.go:171","msg":"trace[246627116] transaction","detail":"{read_only:false; response_revision:1969; number_of_response:1; }","duration":"110.52398ms","start":"2024-03-14T18:31:41.125415Z","end":"2024-03-14T18:31:41.235939Z","steps":["trace[246627116] 'process raft request'  (duration: 110.04873ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:32:32.288844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 switched to configuration voters=(6850040302934775540 7726016870774829891)"}
	{"level":"info","ts":"2024-03-14T18:32:32.289343Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","removed-remote-peer-id":"49a9455a573a24bd","removed-remote-peer-urls":["https://192.168.39.190:2380"]}
	{"level":"info","ts":"2024-03-14T18:32:32.289476Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.289964Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290046Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290411Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290472Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290852Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd","error":"context canceled"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290931Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"49a9455a573a24bd","error":"failed to read 49a9455a573a24bd on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-14T18:32:32.290971Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.291084Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd","error":"context canceled"}
	{"level":"info","ts":"2024-03-14T18:32:32.291104Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.291125Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.291137Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b385368e7357343","removed-remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.302551Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:54328","server-name":"","error":"read tcp 192.168.39.170:2380->192.168.39.190:54328: read: connection reset by peer"}
	{"level":"warn","ts":"2024-03-14T18:32:32.30751Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:54340","server-name":"","error":"EOF"}
	
	
	==> etcd [ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc] <==
	{"level":"info","ts":"2024-03-14T18:28:15.122911Z","caller":"traceutil/trace.go:171","msg":"trace[256093788] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; }","duration":"8.014776074s","start":"2024-03-14T18:28:07.108124Z","end":"2024-03-14T18:28:15.122901Z","steps":["trace[256093788] 'agreement among raft nodes before linearized reading'  (duration: 8.009384348s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:28:15.122939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.108122Z","time spent":"8.014802071s","remote":"127.0.0.1:48602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.120149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.116653Z","time spent":"8.003484208s","remote":"127.0.0.1:48846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.184325Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:28:15.184437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T18:28:15.184672Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b385368e7357343","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-14T18:28:15.185082Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185145Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185176Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.18525Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185323Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185356Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185366Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185371Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185379Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185424Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185472Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185596Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185822Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.190126Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190236Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190247Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-105786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.170:2380"],"advertise-client-urls":["https://192.168.39.170:2379"]}
	
	
	==> kernel <==
	 18:33:16 up 15 min,  0 users,  load average: 0.62, 0.48, 0.29
	Linux ha-105786 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b] <==
	I0314 18:29:59.107185       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 18:30:02.316124       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:05.389271       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:16.399467       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0314 18:30:23.820510       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.15:54244->10.96.0.1:443: read: connection reset by peer
	I0314 18:30:26.896814       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad] <==
	I0314 18:32:43.440338       1 main.go:227] handling current node
	I0314 18:32:43.440352       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:32:43.440361       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:32:43.440500       1 main.go:223] Handling node with IPs: map[192.168.39.190:{}]
	I0314 18:32:43.440544       1 main.go:250] Node ha-105786-m03 has CIDR [10.244.2.0/24] 
	I0314 18:32:43.440627       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:32:43.440636       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:32:53.457542       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:32:53.457591       1 main.go:227] handling current node
	I0314 18:32:53.457601       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:32:53.457608       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:32:53.457843       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:32:53.457852       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:33:03.490108       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:33:03.490189       1 main.go:227] handling current node
	I0314 18:33:03.490213       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:33:03.490230       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:33:03.490344       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:33:03.490363       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:33:13.506282       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:33:13.506454       1 main.go:227] handling current node
	I0314 18:33:13.506487       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:33:13.506513       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:33:13.506766       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:33:13.506818       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f] <==
	E0314 18:30:41.612319       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0314 18:30:41.612403       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	I0314 18:30:41.613271       1 shared_informer.go:318] Caches are synced for configmaps
	E0314 18:30:41.613373       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0314 18:30:41.613420       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:30:41.613426       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0314 18:30:41.613526       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:30:41.619858       1 timeout.go:142] post-timeout activity - time-elapsed: 8.755378ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	I0314 18:30:41.628820       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0314 18:30:41.647391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.190]
	I0314 18:30:41.648795       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:30:41.653582       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:30:41.653682       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:30:41.653783       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:30:41.653807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:30:41.653831       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:30:41.671104       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0314 18:30:41.690757       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0314 18:30:42.509110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0314 18:30:42.924430       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.190 192.168.39.245]
	I0314 18:31:33.971872       1 trace.go:236] Trace[27322001]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,client:192.168.39.170,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (14-Mar-2024 18:31:33.455) (total time: 516ms):
	Trace[27322001]: ["GuaranteedUpdate etcd3" audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 515ms (18:31:33.456)
	Trace[27322001]:  ---"Txn call completed" 509ms (18:31:33.971)]
	Trace[27322001]: [516.725224ms] [516.725224ms] END
	E0314 18:32:40.396291       1 watch.go:287] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoderWithAllocator{writer:(*framer.lengthDelimitedFrameWriter)(0xc004666ca8), encoder:(*versioning.codec)(0xc002222280), memAllocator:(*runtime.Allocator)(0xc004666cc0)})
	
	
	==> kube-apiserver [782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5] <==
	I0314 18:29:59.229131       1 options.go:220] external host was not specified, using 192.168.39.170
	I0314 18:29:59.230543       1 server.go:148] Version: v1.28.4
	I0314 18:29:59.230609       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:29:59.822983       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 18:29:59.835789       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 18:29:59.835914       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 18:29:59.836208       1 instance.go:298] Using reconciler: lease
	W0314 18:30:19.820801       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0314 18:30:19.823745       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0314 18:30:19.837423       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0314 18:30:19.837443       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e] <==
	I0314 18:30:00.096891       1 serving.go:348] Generated self-signed cert in-memory
	I0314 18:30:00.761362       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 18:30:00.761449       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:00.763371       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 18:30:00.763568       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 18:30:00.764667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:30:00.764857       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0314 18:30:20.844984       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.170:8443/healthz\": dial tcp 192.168.39.170:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f] <==
	I0314 18:32:28.923047       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="48.942312ms"
	I0314 18:32:28.992461       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="69.240675ms"
	I0314 18:32:29.055078       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6" fieldPath="" kind="ReplicaSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulDelete" message="Deleted pod: busybox-5b5d89c9d6-gvkgl"
	I0314 18:32:29.100884       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="108.041859ms"
	I0314 18:32:29.144053       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="43.000866ms"
	I0314 18:32:29.159655       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="15.465442ms"
	I0314 18:32:29.159926       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="51.383µs"
	I0314 18:32:29.212652       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="13.237257ms"
	I0314 18:32:29.213308       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="378.586µs"
	I0314 18:32:30.507238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.634266ms"
	I0314 18:32:30.507408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.288µs"
	I0314 18:32:44.012373       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	I0314 18:32:45.352183       1 event.go:307] "Event occurred" object="ha-105786-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-105786-m03 event: Removing Node ha-105786-m03 from Controller"
	E0314 18:32:55.360653       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360820       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360851       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360877       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360908       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360932       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362121       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362143       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362155       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362161       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362167       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362173       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	
	
	==> kube-proxy [50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9] <==
	I0314 18:19:02.664250       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:19:02.714117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:02.714162       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:02.716766       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:02.717745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:02.718063       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:02.718100       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:02.720154       1 config.go:188] "Starting service config controller"
	I0314 18:19:02.720408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:02.720465       1 config.go:315] "Starting node config controller"
	I0314 18:19:02.720491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:02.721511       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:02.721558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:02.820597       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:02.820664       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:02.822840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	E0314 18:28:13.122344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	W0314 18:28:15.103846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Unauthorized
	E0314 18:28:15.103895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	W0314 18:28:15.103973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Unauthorized
	E0314 18:28:15.103981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	W0314 18:28:15.107552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Unauthorized
	E0314 18:28:15.107669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	
	
	==> kube-proxy [dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c] <==
	I0314 18:30:00.646660       1 server_others.go:69] "Using iptables proxy"
	E0314 18:30:03.724306       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:06.798355       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:09.869081       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:16.012282       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:28.301082       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	I0314 18:30:44.907767       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:30:44.959317       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:30:44.959379       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:30:44.963966       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:30:44.964077       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:30:44.964340       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:30:44.964376       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:44.966061       1 config.go:188] "Starting service config controller"
	I0314 18:30:44.966133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:30:44.966161       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:30:44.966165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:30:44.966945       1 config.go:315] "Starting node config controller"
	I0314 18:30:44.966981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:30:45.066672       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:30:45.066839       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:30:45.067090       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2] <==
	W0314 18:28:07.898881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:07.898938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 18:28:07.926438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 18:28:07.926552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 18:28:08.127945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:08.128049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:08.703213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:28:08.703272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:28:08.811171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:28:08.811297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:28:09.018063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:28:09.018138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:28:09.165241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:28:09.165338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:28:09.363080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:28:09.363186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:28:10.170615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:10.170677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:10.170950       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:28:10.171006       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:28:10.215817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:28:10.215918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 18:28:14.418627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:14.418766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:15.079433       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32] <==
	W0314 18:30:38.717457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.717554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:38.957547       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.957635       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.067393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.067456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.432829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.432878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:41.548905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:30:41.548991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:30:41.549072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:30:41.549103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:30:41.549174       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:30:41.549221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:30:41.549408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:30:41.549568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:30:41.549594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:30:41.549738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:30:41.559235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 18:30:41.560822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 18:30:52.555852       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 18:32:28.931399       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sft2w\": pod busybox-5b5d89c9d6-sft2w is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-sft2w" node="ha-105786-m04"
	E0314 18:32:28.931581       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 038d6fed-3cc9-42bc-97bd-27205fe77213(default/busybox-5b5d89c9d6-sft2w) wasn't assumed so cannot be forgotten"
	E0314 18:32:28.931676       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sft2w\": pod busybox-5b5d89c9d6-sft2w is already assigned to node \"ha-105786-m04\"" pod="default/busybox-5b5d89c9d6-sft2w"
	I0314 18:32:28.931820       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-sft2w" node="ha-105786-m04"
	
	
	==> kubelet <==
	Mar 14 18:31:44 ha-105786 kubelet[1439]: E0314 18:31:44.304998    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:31:50 ha-105786 kubelet[1439]: E0314 18:31:50.364412    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:31:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:31:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:31:56 ha-105786 kubelet[1439]: I0314 18:31:56.303920    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:31:56 ha-105786 kubelet[1439]: E0314 18:31:56.304283    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:07 ha-105786 kubelet[1439]: I0314 18:32:07.303934    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:07 ha-105786 kubelet[1439]: E0314 18:32:07.304975    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:18 ha-105786 kubelet[1439]: I0314 18:32:18.304386    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:18 ha-105786 kubelet[1439]: E0314 18:32:18.306308    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:31 ha-105786 kubelet[1439]: I0314 18:32:31.304526    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:31 ha-105786 kubelet[1439]: E0314 18:32:31.305366    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:43 ha-105786 kubelet[1439]: I0314 18:32:43.303827    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:43 ha-105786 kubelet[1439]: E0314 18:32:43.304226    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:32:50 ha-105786 kubelet[1439]: E0314 18:32:50.361049    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:32:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:32:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:32:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:32:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:32:58 ha-105786 kubelet[1439]: I0314 18:32:58.304407    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:32:58 ha-105786 kubelet[1439]: E0314 18:32:58.304811    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:33:11 ha-105786 kubelet[1439]: I0314 18:33:11.304083    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:33:11 ha-105786 kubelet[1439]: E0314 18:33:11.304642    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:33:14.948294  967839 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105786 -n ha-105786
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/DeleteSecondaryNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/DeleteSecondaryNode (48.97s)

                                                
                                    
TestMutliControlPlane/serial/StopCluster (142.02s)

                                                
                                                
=== RUN   TestMutliControlPlane/serial/StopCluster
ha_test.go:531: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 stop -v=7 --alsologtostderr
E0314 18:33:37.904079  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
ha_test.go:531: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 stop -v=7 --alsologtostderr: exit status 82 (2m0.521819256s)

                                                
                                                
-- stdout --
	* Stopping node "ha-105786-m04"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:33:17.631867  967947 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:33:17.632167  967947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:33:17.632177  967947 out.go:304] Setting ErrFile to fd 2...
	I0314 18:33:17.632182  967947 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:33:17.632468  967947 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:33:17.632752  967947 out.go:298] Setting JSON to false
	I0314 18:33:17.632851  967947 mustload.go:65] Loading cluster: ha-105786
	I0314 18:33:17.633367  967947 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:33:17.633463  967947 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:33:17.633644  967947 mustload.go:65] Loading cluster: ha-105786
	I0314 18:33:17.633766  967947 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:33:17.633790  967947 stop.go:39] StopHost: ha-105786-m04
	I0314 18:33:17.634177  967947 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:33:17.634227  967947 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:33:17.650292  967947 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43219
	I0314 18:33:17.650752  967947 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:33:17.651437  967947 main.go:141] libmachine: Using API Version  1
	I0314 18:33:17.651465  967947 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:33:17.651835  967947 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:33:17.654327  967947 out.go:177] * Stopping node "ha-105786-m04"  ...
	I0314 18:33:17.655561  967947 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 18:33:17.655605  967947 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:33:17.655862  967947 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 18:33:17.655884  967947 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:33:17.659016  967947 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:17.659413  967947 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:33:17.659441  967947 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:33:17.659606  967947 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:33:17.659764  967947 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:33:17.659891  967947 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:33:17.660015  967947 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	I0314 18:33:17.750147  967947 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 18:33:17.806798  967947 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 18:33:17.863743  967947 main.go:141] libmachine: Stopping "ha-105786-m04"...
	I0314 18:33:17.863787  967947 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:33:17.865405  967947 main.go:141] libmachine: (ha-105786-m04) Calling .Stop
	I0314 18:33:17.868687  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 0/120
	I0314 18:33:18.870999  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 1/120
	I0314 18:33:19.872404  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 2/120
	I0314 18:33:20.873921  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 3/120
	I0314 18:33:21.875638  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 4/120
	I0314 18:33:22.877904  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 5/120
	I0314 18:33:23.879625  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 6/120
	I0314 18:33:24.881469  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 7/120
	I0314 18:33:25.882874  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 8/120
	I0314 18:33:26.884647  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 9/120
	I0314 18:33:27.886965  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 10/120
	I0314 18:33:28.888433  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 11/120
	I0314 18:33:29.890660  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 12/120
	I0314 18:33:30.892055  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 13/120
	I0314 18:33:31.893632  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 14/120
	I0314 18:33:32.895868  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 15/120
	I0314 18:33:33.897265  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 16/120
	I0314 18:33:34.899001  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 17/120
	I0314 18:33:35.900318  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 18/120
	I0314 18:33:36.901709  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 19/120
	I0314 18:33:37.904037  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 20/120
	I0314 18:33:38.905636  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 21/120
	I0314 18:33:39.907083  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 22/120
	I0314 18:33:40.908573  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 23/120
	I0314 18:33:41.911116  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 24/120
	I0314 18:33:42.912999  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 25/120
	I0314 18:33:43.914951  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 26/120
	I0314 18:33:44.916303  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 27/120
	I0314 18:33:45.917836  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 28/120
	I0314 18:33:46.919108  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 29/120
	I0314 18:33:47.920989  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 30/120
	I0314 18:33:48.922513  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 31/120
	I0314 18:33:49.924801  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 32/120
	I0314 18:33:50.927038  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 33/120
	I0314 18:33:51.928451  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 34/120
	I0314 18:33:52.930586  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 35/120
	I0314 18:33:53.931864  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 36/120
	I0314 18:33:54.933251  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 37/120
	I0314 18:33:55.934716  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 38/120
	I0314 18:33:56.936092  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 39/120
	I0314 18:33:57.938277  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 40/120
	I0314 18:33:58.939938  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 41/120
	I0314 18:33:59.941392  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 42/120
	I0314 18:34:00.942938  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 43/120
	I0314 18:34:01.944394  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 44/120
	I0314 18:34:02.946143  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 45/120
	I0314 18:34:03.947518  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 46/120
	I0314 18:34:04.948999  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 47/120
	I0314 18:34:05.950737  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 48/120
	I0314 18:34:06.952229  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 49/120
	I0314 18:34:07.954417  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 50/120
	I0314 18:34:08.956165  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 51/120
	I0314 18:34:09.958205  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 52/120
	I0314 18:34:10.959691  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 53/120
	I0314 18:34:11.961240  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 54/120
	I0314 18:34:12.963366  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 55/120
	I0314 18:34:13.964696  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 56/120
	I0314 18:34:14.965982  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 57/120
	I0314 18:34:15.967519  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 58/120
	I0314 18:34:16.969012  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 59/120
	I0314 18:34:17.971031  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 60/120
	I0314 18:34:18.972662  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 61/120
	I0314 18:34:19.974953  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 62/120
	I0314 18:34:20.976470  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 63/120
	I0314 18:34:21.979760  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 64/120
	I0314 18:34:22.981756  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 65/120
	I0314 18:34:23.983232  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 66/120
	I0314 18:34:24.985098  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 67/120
	I0314 18:34:25.986816  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 68/120
	I0314 18:34:26.988324  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 69/120
	I0314 18:34:27.990470  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 70/120
	I0314 18:34:28.991668  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 71/120
	I0314 18:34:29.993067  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 72/120
	I0314 18:34:30.994392  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 73/120
	I0314 18:34:31.995752  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 74/120
	I0314 18:34:32.997563  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 75/120
	I0314 18:34:33.998977  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 76/120
	I0314 18:34:35.000341  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 77/120
	I0314 18:34:36.001854  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 78/120
	I0314 18:34:37.003374  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 79/120
	I0314 18:34:38.004917  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 80/120
	I0314 18:34:39.006251  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 81/120
	I0314 18:34:40.007784  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 82/120
	I0314 18:34:41.009394  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 83/120
	I0314 18:34:42.011183  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 84/120
	I0314 18:34:43.013088  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 85/120
	I0314 18:34:44.015289  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 86/120
	I0314 18:34:45.016547  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 87/120
	I0314 18:34:46.018654  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 88/120
	I0314 18:34:47.020428  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 89/120
	I0314 18:34:48.022538  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 90/120
	I0314 18:34:49.024373  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 91/120
	I0314 18:34:50.025774  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 92/120
	I0314 18:34:51.028203  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 93/120
	I0314 18:34:52.029562  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 94/120
	I0314 18:34:53.030850  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 95/120
	I0314 18:34:54.032233  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 96/120
	I0314 18:34:55.033678  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 97/120
	I0314 18:34:56.035171  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 98/120
	I0314 18:34:57.036588  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 99/120
	I0314 18:34:58.038690  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 100/120
	I0314 18:34:59.040549  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 101/120
	I0314 18:35:00.043158  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 102/120
	I0314 18:35:01.044807  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 103/120
	I0314 18:35:02.046771  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 104/120
	I0314 18:35:03.049026  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 105/120
	I0314 18:35:04.050832  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 106/120
	I0314 18:35:05.052590  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 107/120
	I0314 18:35:06.054888  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 108/120
	I0314 18:35:07.056225  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 109/120
	I0314 18:35:08.058154  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 110/120
	I0314 18:35:09.059579  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 111/120
	I0314 18:35:10.061635  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 112/120
	I0314 18:35:11.063038  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 113/120
	I0314 18:35:12.065165  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 114/120
	I0314 18:35:13.067056  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 115/120
	I0314 18:35:14.069210  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 116/120
	I0314 18:35:15.071009  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 117/120
	I0314 18:35:16.072672  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 118/120
	I0314 18:35:17.073956  967947 main.go:141] libmachine: (ha-105786-m04) Waiting for machine to stop 119/120
	I0314 18:35:18.074622  967947 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 18:35:18.074678  967947 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 18:35:18.076550  967947 out.go:177] 
	W0314 18:35:18.077953  967947 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 18:35:18.077983  967947 out.go:239] * 
	* 
	W0314 18:35:18.089136  967947 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 18:35:18.090687  967947 out.go:177] 

                                                
                                                
** /stderr **
ha_test.go:533: failed to stop cluster. args "out/minikube-linux-amd64 -p ha-105786 stop -v=7 --alsologtostderr": exit status 82
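The stderr above traces the stop path polling the libvirt domain roughly once per second for 120 attempts ("Waiting for machine to stop N/120") before giving up with GUEST_STOP_TIMEOUT and exit status 82. A minimal Go sketch of that poll-with-timeout pattern, using hypothetical names (waitForStop, stateFn) rather than minikube's actual stop.go code:

	package main

	import (
		"errors"
		"fmt"
		"time"
	)

	// waitForStop polls stateFn roughly once per second, up to maxAttempts times,
	// and fails if the machine is still running when the budget is exhausted.
	// Hypothetical sketch; minikube's stop path differs in detail.
	func waitForStop(stateFn func() (string, error), maxAttempts int) error {
		for i := 0; i < maxAttempts; i++ {
			state, err := stateFn()
			if err != nil {
				return err
			}
			if state != "Running" {
				return nil
			}
			fmt.Printf("Waiting for machine to stop %d/%d\n", i, maxAttempts)
			time.Sleep(time.Second)
		}
		return errors.New(`unable to stop vm, current state "Running"`)
	}

	func main() {
		// Fake state function for illustration: reports "Running" a few times,
		// then "Stopped". In the failed run above the domain never left "Running".
		attempts := 0
		fakeState := func() (string, error) {
			attempts++
			if attempts > 3 {
				return "Stopped", nil
			}
			return "Running", nil
		}
		if err := waitForStop(fakeState, 120); err != nil {
			fmt.Println("stop err:", err)
		}
	}

In the run above the domain stays in the "Running" state for all 120 attempts, so the temporary error is surfaced and the stop command exits with status 82.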
ha_test.go:537: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:537: (dbg) Non-zero exit: out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr: exit status 3 (18.925022359s)

                                                
                                                
-- stdout --
	ha-105786
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m02
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	ha-105786-m04
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:35:18.157874  968244 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:35:18.158120  968244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:35:18.158129  968244 out.go:304] Setting ErrFile to fd 2...
	I0314 18:35:18.158133  968244 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:35:18.158293  968244 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:35:18.158471  968244 out.go:298] Setting JSON to false
	I0314 18:35:18.158502  968244 mustload.go:65] Loading cluster: ha-105786
	I0314 18:35:18.158619  968244 notify.go:220] Checking for updates...
	I0314 18:35:18.158868  968244 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:35:18.158891  968244 status.go:255] checking status of ha-105786 ...
	I0314 18:35:18.159276  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.159332  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.176884  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45523
	I0314 18:35:18.177335  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.177948  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.177972  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.178358  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.178565  968244 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:35:18.180404  968244 status.go:330] ha-105786 host status = "Running" (err=<nil>)
	I0314 18:35:18.180421  968244 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:35:18.180701  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.180736  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.196013  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36647
	I0314 18:35:18.196413  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.196861  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.196884  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.197204  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.197396  968244 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:35:18.199926  968244 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:35:18.200402  968244 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:35:18.200432  968244 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:35:18.200612  968244 host.go:66] Checking if "ha-105786" exists ...
	I0314 18:35:18.201023  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.201062  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.215310  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36395
	I0314 18:35:18.215758  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.216201  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.216237  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.216547  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.216728  968244 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:35:18.216967  968244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:35:18.216989  968244 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:35:18.219472  968244 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:35:18.219943  968244 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:35:18.219968  968244 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:35:18.220122  968244 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:35:18.220336  968244 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:35:18.220506  968244 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:35:18.220648  968244 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:35:18.307070  968244 ssh_runner.go:195] Run: systemctl --version
	I0314 18:35:18.315923  968244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:35:18.336032  968244 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:35:18.336058  968244 api_server.go:166] Checking apiserver status ...
	I0314 18:35:18.336090  968244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:35:18.356551  968244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/5437/cgroup
	W0314 18:35:18.367016  968244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/5437/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:35:18.367086  968244 ssh_runner.go:195] Run: ls
	I0314 18:35:18.372973  968244 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:35:18.377944  968244 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:35:18.377964  968244 status.go:422] ha-105786 apiserver status = Running (err=<nil>)
	I0314 18:35:18.377973  968244 status.go:257] ha-105786 status: &{Name:ha-105786 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:35:18.377997  968244 status.go:255] checking status of ha-105786-m02 ...
	I0314 18:35:18.378291  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.378324  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.393595  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43409
	I0314 18:35:18.394044  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.394479  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.394503  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.394842  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.395033  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetState
	I0314 18:35:18.396529  968244 status.go:330] ha-105786-m02 host status = "Running" (err=<nil>)
	I0314 18:35:18.396547  968244 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:35:18.396866  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.396910  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.411456  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43903
	I0314 18:35:18.411883  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.412366  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.412392  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.412740  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.412941  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetIP
	I0314 18:35:18.415290  968244 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:35:18.415762  968244 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:30:04 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:35:18.415797  968244 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:35:18.415917  968244 host.go:66] Checking if "ha-105786-m02" exists ...
	I0314 18:35:18.416324  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.416361  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.432307  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39295
	I0314 18:35:18.432763  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.433240  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.433253  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.433583  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.433764  968244 main.go:141] libmachine: (ha-105786-m02) Calling .DriverName
	I0314 18:35:18.433969  968244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:35:18.433992  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHHostname
	I0314 18:35:18.436308  968244 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:35:18.436653  968244 main.go:141] libmachine: (ha-105786-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:c9:c4:3c", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:30:04 +0000 UTC Type:0 Mac:52:54:00:c9:c4:3c Iaid: IPaddr:192.168.39.245 Prefix:24 Hostname:ha-105786-m02 Clientid:01:52:54:00:c9:c4:3c}
	I0314 18:35:18.436671  968244 main.go:141] libmachine: (ha-105786-m02) DBG | domain ha-105786-m02 has defined IP address 192.168.39.245 and MAC address 52:54:00:c9:c4:3c in network mk-ha-105786
	I0314 18:35:18.436833  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHPort
	I0314 18:35:18.437021  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHKeyPath
	I0314 18:35:18.437183  968244 main.go:141] libmachine: (ha-105786-m02) Calling .GetSSHUsername
	I0314 18:35:18.437353  968244 sshutil.go:53] new ssh client: &{IP:192.168.39.245 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m02/id_rsa Username:docker}
	I0314 18:35:18.532715  968244 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:35:18.553571  968244 kubeconfig.go:125] found "ha-105786" server: "https://192.168.39.254:8443"
	I0314 18:35:18.553602  968244 api_server.go:166] Checking apiserver status ...
	I0314 18:35:18.553644  968244 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:35:18.571347  968244 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup
	W0314 18:35:18.582801  968244 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1523/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:35:18.582851  968244 ssh_runner.go:195] Run: ls
	I0314 18:35:18.588396  968244 api_server.go:253] Checking apiserver healthz at https://192.168.39.254:8443/healthz ...
	I0314 18:35:18.593138  968244 api_server.go:279] https://192.168.39.254:8443/healthz returned 200:
	ok
	I0314 18:35:18.593158  968244 status.go:422] ha-105786-m02 apiserver status = Running (err=<nil>)
	I0314 18:35:18.593167  968244 status.go:257] ha-105786-m02 status: &{Name:ha-105786-m02 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:35:18.593182  968244 status.go:255] checking status of ha-105786-m04 ...
	I0314 18:35:18.593462  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.593500  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.609372  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44031
	I0314 18:35:18.609783  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.610247  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.610271  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.610692  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.610908  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetState
	I0314 18:35:18.612660  968244 status.go:330] ha-105786-m04 host status = "Running" (err=<nil>)
	I0314 18:35:18.612734  968244 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:35:18.613081  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.613122  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.628066  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39597
	I0314 18:35:18.628515  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.629026  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.629055  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.629381  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.629557  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetIP
	I0314 18:35:18.632433  968244 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:35:18.632911  968244 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:35:18.632952  968244 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:35:18.633083  968244 host.go:66] Checking if "ha-105786-m04" exists ...
	I0314 18:35:18.633366  968244 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:35:18.633398  968244 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:35:18.647423  968244 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36403
	I0314 18:35:18.647881  968244 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:35:18.648344  968244 main.go:141] libmachine: Using API Version  1
	I0314 18:35:18.648380  968244 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:35:18.648739  968244 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:35:18.648905  968244 main.go:141] libmachine: (ha-105786-m04) Calling .DriverName
	I0314 18:35:18.649063  968244 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:35:18.649079  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHHostname
	I0314 18:35:18.651715  968244 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:35:18.652175  968244 main.go:141] libmachine: (ha-105786-m04) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:2c:c1:3e", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:21:58 +0000 UTC Type:0 Mac:52:54:00:2c:c1:3e Iaid: IPaddr:192.168.39.73 Prefix:24 Hostname:ha-105786-m04 Clientid:01:52:54:00:2c:c1:3e}
	I0314 18:35:18.652203  968244 main.go:141] libmachine: (ha-105786-m04) DBG | domain ha-105786-m04 has defined IP address 192.168.39.73 and MAC address 52:54:00:2c:c1:3e in network mk-ha-105786
	I0314 18:35:18.652419  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHPort
	I0314 18:35:18.652610  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHKeyPath
	I0314 18:35:18.652794  968244 main.go:141] libmachine: (ha-105786-m04) Calling .GetSSHUsername
	I0314 18:35:18.652955  968244 sshutil.go:53] new ssh client: &{IP:192.168.39.73 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786-m04/id_rsa Username:docker}
	W0314 18:35:37.016432  968244 sshutil.go:64] dial failure (will retry): dial tcp 192.168.39.73:22: connect: no route to host
	W0314 18:35:37.016546  968244 start.go:268] error running df -h /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	E0314 18:35:37.016566  968244 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host
	I0314 18:35:37.016574  968244 status.go:257] ha-105786-m04 status: &{Name:ha-105786-m04 Host:Error Kubelet:Nonexistent APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	E0314 18:35:37.016618  968244 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.73:22: connect: no route to host

                                                
                                                
** /stderr **
ha_test.go:540: failed to run minikube status. args "out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr" : exit status 3
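The status stderr above walks the per-node checks: launch the kvm2 plugin server, read the domain state, SSH in to run "systemctl is-active kubelet", then probe the apiserver at https://192.168.39.254:8443/healthz. For ha-105786-m04 the SSH dial fails with "no route to host", which is why that node is reported Host:Error / Kubelet:Nonexistent and the command exits with status 3. A minimal sketch of the healthz probe, with a hypothetical checkHealthz helper; the real client is built from the cluster certificates rather than skipping TLS verification:

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// checkHealthz issues a GET against the apiserver's /healthz endpoint and
	// reports whether it answered 200 "ok". TLS verification is skipped here
	// purely for brevity in this sketch.
	func checkHealthz(server string) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			Transport: &http.Transport{
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		resp, err := client.Get(server + "/healthz")
		if err != nil {
			return err
		}
		defer resp.Body.Close()
		body, _ := io.ReadAll(resp.Body)
		if resp.StatusCode != http.StatusOK {
			return fmt.Errorf("healthz returned %d: %s", resp.StatusCode, body)
		}
		return nil
	}

	func main() {
		if err := checkHealthz("https://192.168.39.254:8443"); err != nil {
			fmt.Println("apiserver status = Stopped:", err)
		} else {
			fmt.Println("apiserver status = Running")
		}
	}

In this run the probe against the HA virtual IP succeeds (200 "ok") for both surviving control-plane nodes, so only the unreachable worker drags the overall status command into a non-zero exit.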
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p ha-105786 -n ha-105786
helpers_test.go:244: <<< TestMutliControlPlane/serial/StopCluster FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMutliControlPlane/serial/StopCluster]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p ha-105786 logs -n 25: (1.882503327s)
helpers_test.go:252: TestMutliControlPlane/serial/StopCluster logs: 
-- stdout --
	
	==> Audit <==
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| Command |                                       Args                                       |  Profile  |  User   | Version |     Start Time      |      End Time       |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m04 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp testdata/cp-test.txt                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04:/home/docker/cp-test.txt                                           |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt                       |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786 sudo cat                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786.txt                                 |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m02 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt                             |           |         |         |                     |                     |
	| cp      | ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt               |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n                                                                 | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | ha-105786-m04 sudo cat                                                           |           |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                         |           |         |         |                     |                     |
	| ssh     | ha-105786 ssh -n ha-105786-m03 sudo cat                                          | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC | 14 Mar 24 18:22 UTC |
	|         | /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt                             |           |         |         |                     |                     |
	| node    | ha-105786 node stop m02 -v=7                                                     | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:22 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | ha-105786 node start m02 -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:25 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786 -v=7                                                           | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | -p ha-105786 -v=7                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:26 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| start   | -p ha-105786 --wait=true -v=7                                                    | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:28 UTC | 14 Mar 24 18:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| node    | list -p ha-105786                                                                | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC |                     |
	| node    | ha-105786 node delete m03 -v=7                                                   | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:32 UTC | 14 Mar 24 18:32 UTC |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	| stop    | ha-105786 stop -v=7                                                              | ha-105786 | jenkins | v1.32.0 | 14 Mar 24 18:33 UTC |                     |
	|         | --alsologtostderr                                                                |           |         |         |                     |                     |
	|---------|----------------------------------------------------------------------------------|-----------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:28:14
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:28:14.135581  966381 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:28:14.135809  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.135819  966381 out.go:304] Setting ErrFile to fd 2...
	I0314 18:28:14.135823  966381 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:28:14.136017  966381 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:28:14.136635  966381 out.go:298] Setting JSON to false
	I0314 18:28:14.137555  966381 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":94246,"bootTime":1710346648,"procs":185,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:28:14.137623  966381 start.go:139] virtualization: kvm guest
	I0314 18:28:14.140151  966381 out.go:177] * [ha-105786] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:28:14.141865  966381 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:28:14.143139  966381 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:28:14.141950  966381 notify.go:220] Checking for updates...
	I0314 18:28:14.145606  966381 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:28:14.146880  966381 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:28:14.148172  966381 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:28:14.149433  966381 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:28:14.151456  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.151545  966381 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:28:14.151938  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.152008  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.167584  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33923
	I0314 18:28:14.167994  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.168545  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.168568  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.168939  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.169128  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.204576  966381 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:28:14.205886  966381 start.go:297] selected driver: kvm2
	I0314 18:28:14.205904  966381 start.go:901] validating driver "kvm2" against &{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVer
sion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false ef
k:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountG
ID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.206073  966381 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:28:14.206444  966381 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.206518  966381 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:28:14.221472  966381 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:28:14.222163  966381 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:28:14.222234  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:28:14.222247  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:28:14.222320  966381 start.go:340] cluster config:
	{Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39
.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-til
ler:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType
:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:28:14.222495  966381 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:28:14.224385  966381 out.go:177] * Starting "ha-105786" primary control-plane node in "ha-105786" cluster
	I0314 18:28:14.225804  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:28:14.225844  966381 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:28:14.225865  966381 cache.go:56] Caching tarball of preloaded images
	I0314 18:28:14.225953  966381 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:28:14.225968  966381 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:28:14.226092  966381 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/config.json ...
	I0314 18:28:14.226284  966381 start.go:360] acquireMachinesLock for ha-105786: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:28:14.226328  966381 start.go:364] duration metric: took 24.57µs to acquireMachinesLock for "ha-105786"
	I0314 18:28:14.226352  966381 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:28:14.226360  966381 fix.go:54] fixHost starting: 
	I0314 18:28:14.226657  966381 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:28:14.226701  966381 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:28:14.241136  966381 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38977
	I0314 18:28:14.241530  966381 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:28:14.241978  966381 main.go:141] libmachine: Using API Version  1
	I0314 18:28:14.242009  966381 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:28:14.242325  966381 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:28:14.242572  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.242703  966381 main.go:141] libmachine: (ha-105786) Calling .GetState
	I0314 18:28:14.244394  966381 fix.go:112] recreateIfNeeded on ha-105786: state=Running err=<nil>
	W0314 18:28:14.244429  966381 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:28:14.246194  966381 out.go:177] * Updating the running kvm2 "ha-105786" VM ...
	I0314 18:28:14.247325  966381 machine.go:94] provisionDockerMachine start ...
	I0314 18:28:14.247347  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:28:14.247553  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.250303  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.250837  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.250860  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.251033  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.251208  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251368  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.251526  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.251694  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.251915  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.251931  966381 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:28:14.358280  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.358312  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358557  966381 buildroot.go:166] provisioning hostname "ha-105786"
	I0314 18:28:14.358582  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.358772  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.361558  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362012  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.362038  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.362163  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.362358  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362546  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.362671  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.362833  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.363062  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.363078  966381 main.go:141] libmachine: About to run SSH command:
	sudo hostname ha-105786 && echo "ha-105786" | sudo tee /etc/hostname
	I0314 18:28:14.484069  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: ha-105786
	
	I0314 18:28:14.484119  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.487433  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.487941  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.487983  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.488155  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.488354  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488522  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.488656  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.488852  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.489074  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.489104  966381 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sha-105786' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 ha-105786/g' /etc/hosts;
				else 
					echo '127.0.1.1 ha-105786' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:28:14.594587  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:28:14.594624  966381 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:28:14.594655  966381 buildroot.go:174] setting up certificates
	I0314 18:28:14.594666  966381 provision.go:84] configureAuth start
	I0314 18:28:14.594676  966381 main.go:141] libmachine: (ha-105786) Calling .GetMachineName
	I0314 18:28:14.594943  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:28:14.597572  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598001  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.598034  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.598168  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.600676  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601049  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.601074  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.601190  966381 provision.go:143] copyHostCerts
	I0314 18:28:14.601228  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601277  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:28:14.601287  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:28:14.601369  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:28:14.601478  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601510  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:28:14.601520  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:28:14.601557  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:28:14.601636  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601666  966381 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:28:14.601679  966381 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:28:14.601714  966381 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:28:14.601795  966381 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.ha-105786 san=[127.0.0.1 192.168.39.170 ha-105786 localhost minikube]
	I0314 18:28:14.793407  966381 provision.go:177] copyRemoteCerts
	I0314 18:28:14.793506  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:28:14.793541  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.796538  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.796966  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.796996  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.797178  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.797386  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.797602  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.797798  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:28:14.880507  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:28:14.880595  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:28:14.909434  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:28:14.909498  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1200 bytes)
	I0314 18:28:14.938304  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:28:14.938363  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:28:14.967874  966381 provision.go:87] duration metric: took 373.192835ms to configureAuth
	I0314 18:28:14.967907  966381 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:28:14.968172  966381 config.go:182] Loaded profile config "ha-105786": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:28:14.968305  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:28:14.970873  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971322  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:28:14.971347  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:28:14.971576  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:28:14.971817  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972020  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:28:14.972248  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:28:14.972453  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:28:14.972619  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:28:14.972633  966381 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:29:45.881390  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:29:45.881434  966381 machine.go:97] duration metric: took 1m31.634090272s to provisionDockerMachine
	I0314 18:29:45.881454  966381 start.go:293] postStartSetup for "ha-105786" (driver="kvm2")
	I0314 18:29:45.881471  966381 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:29:45.881500  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:45.881876  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:29:45.881911  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:45.885484  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886080  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:45.886109  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:45.886288  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:45.886498  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:45.886677  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:45.886828  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:45.969077  966381 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:29:45.973787  966381 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:29:45.973816  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:29:45.973920  966381 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:29:45.973999  966381 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:29:45.974015  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:29:45.974096  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:29:45.984935  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:46.011260  966381 start.go:296] duration metric: took 129.788525ms for postStartSetup
	I0314 18:29:46.011314  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.011643  966381 ssh_runner.go:195] Run: sudo ls --almost-all -1 /var/lib/minikube/backup
	I0314 18:29:46.011670  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.014882  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015320  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.015348  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.015461  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.015649  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.015818  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.015958  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	W0314 18:29:46.095015  966381 fix.go:99] cannot read backup folder, skipping restore: read dir: sudo ls --almost-all -1 /var/lib/minikube/backup: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/var/lib/minikube/backup': No such file or directory
	I0314 18:29:46.095038  966381 fix.go:56] duration metric: took 1m31.868679135s for fixHost
	I0314 18:29:46.095062  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.097850  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098240  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.098263  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.098402  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.098618  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098810  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.098954  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.099135  966381 main.go:141] libmachine: Using SSH client type: native
	I0314 18:29:46.099304  966381 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.170 22 <nil> <nil>}
	I0314 18:29:46.099313  966381 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:29:46.201375  966381 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710440986.158036020
	
	I0314 18:29:46.201405  966381 fix.go:216] guest clock: 1710440986.158036020
	I0314 18:29:46.201414  966381 fix.go:229] Guest: 2024-03-14 18:29:46.15803602 +0000 UTC Remote: 2024-03-14 18:29:46.095045686 +0000 UTC m=+92.011661084 (delta=62.990334ms)
	I0314 18:29:46.201440  966381 fix.go:200] guest clock delta is within tolerance: 62.990334ms
	I0314 18:29:46.201447  966381 start.go:83] releasing machines lock for "ha-105786", held for 1m31.975109616s
	I0314 18:29:46.201474  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.201805  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:46.204592  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205008  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.205040  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.205187  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205819  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.205986  966381 main.go:141] libmachine: (ha-105786) Calling .DriverName
	I0314 18:29:46.206081  966381 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:29:46.206122  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.206196  966381 ssh_runner.go:195] Run: cat /version.json
	I0314 18:29:46.206230  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHHostname
	I0314 18:29:46.209016  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209269  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209428  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209454  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.209632  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.209841  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.209848  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:46.209879  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:46.210105  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHPort
	I0314 18:29:46.210118  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210303  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.210337  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHKeyPath
	I0314 18:29:46.210512  966381 main.go:141] libmachine: (ha-105786) Calling .GetSSHUsername
	I0314 18:29:46.210654  966381 sshutil.go:53] new ssh client: &{IP:192.168.39.170 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/ha-105786/id_rsa Username:docker}
	I0314 18:29:46.286133  966381 ssh_runner.go:195] Run: systemctl --version
	I0314 18:29:46.314007  966381 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:29:46.480684  966381 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 18:29:46.491962  966381 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:29:46.492020  966381 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:29:46.502226  966381 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:29:46.502257  966381 start.go:494] detecting cgroup driver to use...
	I0314 18:29:46.502322  966381 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:29:46.518806  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:29:46.533532  966381 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:29:46.533603  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:29:46.547594  966381 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:29:46.562640  966381 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:29:46.748094  966381 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:29:46.913423  966381 docker.go:233] disabling docker service ...
	I0314 18:29:46.913498  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:29:46.935968  966381 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:29:46.951541  966381 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:29:47.101123  966381 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:29:47.251805  966381 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:29:47.268510  966381 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:29:47.289410  966381 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:29:47.289473  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.300842  966381 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:29:47.300915  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.311901  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.322706  966381 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:29:47.333403  966381 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:29:47.344317  966381 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:29:47.353903  966381 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:29:47.363435  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:47.507378  966381 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:29:50.656435  966381 ssh_runner.go:235] Completed: sudo systemctl restart crio: (3.149013693s)
	I0314 18:29:50.656479  966381 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:29:50.656556  966381 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
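	[editor's note] The restart above is followed by a bounded wait for the CRI socket to come back ("Will wait 60s for socket path /var/run/crio/crio.sock", checked with stat). As a minimal illustrative sketch only — not minikube's actual implementation — the same check can be written as a stat poll with a deadline; the socket path and 60s timeout come from the log, while the function name, poll interval, and main wrapper are assumptions:

	// sketch: poll for a socket path until it exists or a deadline passes
	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket returns nil once path exists, or an error after timeout.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out waiting for %s", path)
			}
			time.Sleep(500 * time.Millisecond)
		}
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("socket is present")
	}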
	I0314 18:29:50.663052  966381 start.go:562] Will wait 60s for crictl version
	I0314 18:29:50.663110  966381 ssh_runner.go:195] Run: which crictl
	I0314 18:29:50.667345  966381 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:29:50.709178  966381 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:29:50.709266  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.740672  966381 ssh_runner.go:195] Run: crio --version
	I0314 18:29:50.776822  966381 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:29:50.778198  966381 main.go:141] libmachine: (ha-105786) Calling .GetIP
	I0314 18:29:50.781287  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781657  966381 main.go:141] libmachine: (ha-105786) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:87:0a:bd", ip: ""} in network mk-ha-105786: {Iface:virbr1 ExpiryTime:2024-03-14 19:18:18 +0000 UTC Type:0 Mac:52:54:00:87:0a:bd Iaid: IPaddr:192.168.39.170 Prefix:24 Hostname:ha-105786 Clientid:01:52:54:00:87:0a:bd}
	I0314 18:29:50.781682  966381 main.go:141] libmachine: (ha-105786) DBG | domain ha-105786 has defined IP address 192.168.39.170 and MAC address 52:54:00:87:0a:bd in network mk-ha-105786
	I0314 18:29:50.781874  966381 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:29:50.787094  966381 kubeadm.go:877] updating cluster {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Cl
usterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false fre
shpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker M
ountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:29:50.787242  966381 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:29:50.787286  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.836424  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.836452  966381 crio.go:415] Images already preloaded, skipping extraction
	I0314 18:29:50.836534  966381 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:29:50.883409  966381 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:29:50.883436  966381 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:29:50.883446  966381 kubeadm.go:928] updating node { 192.168.39.170 8443 v1.28.4 crio true true} ...
	I0314 18:29:50.883558  966381 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=ha-105786 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.170
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:29:50.883631  966381 ssh_runner.go:195] Run: crio config
	I0314 18:29:50.940266  966381 cni.go:84] Creating CNI manager for ""
	I0314 18:29:50.940294  966381 cni.go:136] multinode detected (4 nodes found), recommending kindnet
	I0314 18:29:50.940311  966381 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:29:50.940343  966381 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.170 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:ha-105786 NodeName:ha-105786 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.170"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.170 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernete
s/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:29:50.940523  966381 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.170
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "ha-105786"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.170
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.170"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:29:50.940545  966381 kube-vip.go:105] generating kube-vip config ...
	I0314 18:29:50.940622  966381 kube-vip.go:125] kube-vip config:
	apiVersion: v1
	kind: Pod
	metadata:
	  creationTimestamp: null
	  name: kube-vip
	  namespace: kube-system
	spec:
	  containers:
	  - args:
	    - manager
	    env:
	    - name: vip_arp
	      value: "true"
	    - name: port
	      value: "8443"
	    - name: vip_interface
	      value: eth0
	    - name: vip_cidr
	      value: "32"
	    - name: dns_mode
	      value: first
	    - name: cp_enable
	      value: "true"
	    - name: cp_namespace
	      value: kube-system
	    - name: vip_leaderelection
	      value: "true"
	    - name: vip_leasename
	      value: plndr-cp-lock
	    - name: vip_leaseduration
	      value: "5"
	    - name: vip_renewdeadline
	      value: "3"
	    - name: vip_retryperiod
	      value: "1"
	    - name: address
	      value: 192.168.39.254
	    - name: prometheus_server
	      value: :2112
	    - name : lb_enable
	      value: "true"
	    - name: lb_port
	      value: "8443"
	    image: ghcr.io/kube-vip/kube-vip:v0.7.1
	    imagePullPolicy: IfNotPresent
	    name: kube-vip
	    resources: {}
	    securityContext:
	      capabilities:
	        add:
	        - NET_ADMIN
	        - NET_RAW
	    volumeMounts:
	    - mountPath: /etc/kubernetes/admin.conf
	      name: kubeconfig
	  hostAliases:
	  - hostnames:
	    - kubernetes
	    ip: 127.0.0.1
	  hostNetwork: true
	  volumes:
	  - hostPath:
	      path: "/etc/kubernetes/admin.conf"
	    name: kubeconfig
	status: {}
	I0314 18:29:50.940691  966381 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:29:50.952135  966381 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:29:50.952199  966381 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube /etc/kubernetes/manifests
	I0314 18:29:50.962147  966381 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (309 bytes)
	I0314 18:29:50.980512  966381 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:29:50.999194  966381 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2153 bytes)
	I0314 18:29:51.017990  966381 ssh_runner.go:362] scp memory --> /etc/kubernetes/manifests/kube-vip.yaml (1346 bytes)
	I0314 18:29:51.037979  966381 ssh_runner.go:195] Run: grep 192.168.39.254	control-plane.minikube.internal$ /etc/hosts
	I0314 18:29:51.042188  966381 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:29:51.223538  966381 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:29:51.259248  966381 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786 for IP: 192.168.39.170
	I0314 18:29:51.259276  966381 certs.go:194] generating shared ca certs ...
	I0314 18:29:51.259299  966381 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.259518  966381 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:29:51.259558  966381 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:29:51.259574  966381 certs.go:256] generating profile certs ...
	I0314 18:29:51.259649  966381 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/client.key
	I0314 18:29:51.259676  966381 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054
	I0314 18:29:51.259692  966381 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.39.170 192.168.39.245 192.168.39.190 192.168.39.254]
	I0314 18:29:51.368068  966381 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 ...
	I0314 18:29:51.368106  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054: {Name:mk104c6891f4c562b4c5c1e2fd4fbf7ab8a19f7b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368317  966381 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 ...
	I0314 18:29:51.368336  966381 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054: {Name:mk5452416f251a959745d5afed1f6504eb414193 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:29:51.368435  966381 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt
	I0314 18:29:51.368581  966381 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key.cb243054 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key
	I0314 18:29:51.368708  966381 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key
	I0314 18:29:51.368724  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:29:51.368742  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:29:51.368755  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:29:51.368765  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:29:51.368785  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:29:51.368797  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:29:51.368818  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:29:51.368830  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:29:51.368880  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:29:51.368907  966381 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:29:51.368918  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:29:51.368936  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:29:51.368955  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:29:51.368982  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:29:51.369020  966381 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:29:51.369046  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.369061  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.369070  966381 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.369722  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:29:51.397901  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:29:51.424922  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:29:51.451188  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:29:51.476429  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 18:29:51.502245  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1675 bytes)
	I0314 18:29:51.528392  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:29:51.556131  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/ha-105786/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 18:29:51.582461  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:29:51.616725  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:29:51.644959  966381 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:29:51.672417  966381 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:29:51.691534  966381 ssh_runner.go:195] Run: openssl version
	I0314 18:29:51.699328  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:29:51.711530  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716524  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.716584  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:29:51.722737  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:29:51.734143  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:29:51.746609  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751712  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.751761  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:29:51.758096  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:29:51.769099  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:29:51.781745  966381 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786840  966381 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.786914  966381 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:29:51.793202  966381 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:29:51.804099  966381 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:29:51.809375  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:29:51.815416  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:29:51.821550  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:29:51.827643  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:29:51.833646  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:29:51.839821  966381 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
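	[editor's note] The six "openssl x509 -checkend 86400" runs above only ask whether each certificate expires within the next 24 hours (exit status non-zero if it does). A minimal sketch of the equivalent check in Go, assuming hypothetical names (expiresWithin, the main wrapper) while taking the certificate path and the 86400-second window from the log:

	// sketch: report whether a PEM certificate expires within a window
	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM-encoded certificate at path
	// expires within the given window (the -checkend semantics).
	func expiresWithin(path string, window time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM block in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(window).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Fprintln(os.Stderr, err)
			os.Exit(1)
		}
		fmt.Println("expires within 24h:", soon)
	}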
	I0314 18:29:51.845930  966381 kubeadm.go:391] StartCluster: {Name:ha-105786 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 Clust
erName:ha-105786 Namespace:default APIServerHAVIP:192.168.39.254 APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.170 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.245 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m03 IP:192.168.39.190 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m04 IP:192.168.39.73 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshp
od:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker Moun
tIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:29:51.846070  966381 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:29:51.846123  966381 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:29:51.887584  966381 cri.go:89] found id: "f7e5ce54c582966673026ea5da16491d518cf3c3aaeee9a7ceb0f7b5a1372c5f"
	I0314 18:29:51.887615  966381 cri.go:89] found id: "2f4ac879a201f1cf19940c5c6d0a391e018b4fb1e3238c5e18f108de7dfe9d49"
	I0314 18:29:51.887619  966381 cri.go:89] found id: "c1c9118d7d57dfe1750090185c762b93b050f71cdc3e71a82977bc49623be966"
	I0314 18:29:51.887621  966381 cri.go:89] found id: "ce25ae74cf40b4fa581afd2062a90e823cb3fa2657388e63ce08248941a6fad4"
	I0314 18:29:51.887624  966381 cri.go:89] found id: "ff5ec432d711f7bf3857c40fae7b1a0cfffa8ec3ef444da90739120c94fc3675"
	I0314 18:29:51.887627  966381 cri.go:89] found id: "b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b"
	I0314 18:29:51.887630  966381 cri.go:89] found id: "4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817"
	I0314 18:29:51.887632  966381 cri.go:89] found id: "fa5c51367cb910d29fc6089ed7278c3a91f89058d81869ad96be85710c59dd4d"
	I0314 18:29:51.887634  966381 cri.go:89] found id: "50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9"
	I0314 18:29:51.887640  966381 cri.go:89] found id: "3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2"
	I0314 18:29:51.887642  966381 cri.go:89] found id: "dd5f374c12463b2840fca3d3dd2c581be39ddc2cf73baf75a6e64c5ed2060183"
	I0314 18:29:51.887644  966381 cri.go:89] found id: "ee804d488d0b1f8ae4bdeb91b74807e1897408f55fa27f9f4d9ef28c99f4a922"
	I0314 18:29:51.887646  966381 cri.go:89] found id: "ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc"
	I0314 18:29:51.887650  966381 cri.go:89] found id: ""
	I0314 18:29:51.887692  966381 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.762246449Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8227c8ee-2a95-4517-9348-9d15318a9215 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.764496097Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9580c75f-793b-4f0c-9d68-ac6c337ea799 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.765097765Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441337765068323,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9580c75f-793b-4f0c-9d68-ac6c337ea799 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.766197439Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7fb8da4a-9f52-44f5-9284-6889c1cb2fa1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.766378015Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7fb8da4a-9f52-44f5-9284-6889c1cb2fa1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.766920783Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7fb8da4a-9f52-44f5-9284-6889c1cb2fa1 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.815380717Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=5bbd6323-b3ff-45af-9920-7660eee8c016 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.815521799Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=5bbd6323-b3ff-45af-9920-7660eee8c016 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.816855586Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d6a9ee7a-9cbe-4919-a709-af6286ca9e7d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.817321331Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441337817298303,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d6a9ee7a-9cbe-4919-a709-af6286ca9e7d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.818058844Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3756e996-a187-4a5d-8505-f060a3556208 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.818275633Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3756e996-a187-4a5d-8505-f060a3556208 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.818672129Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3756e996-a187-4a5d-8505-f060a3556208 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.863362300Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=8e720f32-9ad5-4ed1-859b-c339da145316 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.863788629Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=8e720f32-9ad5-4ed1-859b-c339da145316 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.865354374Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=c1f3101b-22fb-4f8c-b731-c1882750bdb1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.865867262Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710441337865843437,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:146620,},InodesUsed:&UInt64Value{Value:72,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=c1f3101b-22fb-4f8c-b731-c1882750bdb1 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.866806470Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e03bd495-402d-4f08-80fc-e7fcded6c54b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.866889094Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e03bd495-402d-4f08-80fc-e7fcded6c54b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.867323367Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e03bd495-402d-4f08-80fc-e7fcded6c54b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.888904320Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=aa24c297-8cd7-4061-90e9-ca7a7a50e4a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.891131749Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4h99c,Uid:6f1d3430-1aec-4155-8b75-951d851d54ae,Namespace:default,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710441031499520920,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:21:34.785609145Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cx8rc,Uid:d2e960de-67a9-4385-ba02-78a744602bcc,Namespace:kube-system,Attempt:1,},Sta
te:SANDBOX_READY,CreatedAt:1710440997873572735,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:05.130122921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jsddl,Uid:bdbdea16-97b0-4581-8bab-9a472af11004,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997825478815,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.see
n: 2024-03-14T18:19:05.130015748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-ha-105786,Uid:dc5e46764078ce514b56622c3d7888bf,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997804507421,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: dc5e46764078ce514b56622c3d7888bf,kubernetes.io/config.seen: 2024-03-14T18:18:50.238637371Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:566fc43f-5610-4dcd-b683-1cc87e6ed609,Namespace:kube-syste
m,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997780144979,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath
\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T18:19:05.127453312Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&PodSandboxMetadata{Name:kindnet-9b2pr,Uid:e23e9c49-0b7d-46ca-ae62-11e9b26a1280,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997770380748,Labels:map[string]string{app: kindnet,controller-revision-hash: bb65b84c4,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,k8s-app: kindnet,pod-template-generation: 1,tier: node,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.650209605Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&PodSandboxMetadata{Name:kube-apiserver-ha-105786,Uid:6da
c53b7248a384afeccfc55d43bb2fb,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997711563691,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.39.170:8443,kubernetes.io/config.hash: 6dac53b7248a384afeccfc55d43bb2fb,kubernetes.io/config.seen: 2024-03-14T18:18:50.238635960Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&PodSandboxMetadata{Name:kube-proxy-hd8mx,Uid:3e003f67-93dd-4105-a7bd-68d9af563ea4,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997709812403,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.
name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.519257450Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-105786,Uid:ec78945afcff39cee32fcf6f6d645c30,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997697898517,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec78945afcff39cee32fcf6f6d645c30,kubernetes.io/config.seen: 2024-03-14T18:18:50.238638790Z,kubernetes.io/config.source: file,},RuntimeHandle
r:,},&PodSandbox{Id:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&PodSandboxMetadata{Name:etcd-ha-105786,Uid:0cd908946f83a665c0ef77bb7bd5e5ea,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440997687102648,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.170:2379,kubernetes.io/config.hash: 0cd908946f83a665c0ef77bb7bd5e5ea,kubernetes.io/config.seen: 2024-03-14T18:18:50.238632058Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&PodSandboxMetadata{Name:kube-vip-ha-105786,Uid:8a8d15e80402cb826977826234ee3c6a,Namespace:kube-system,Attempt:1,},State:SANDBOX_READY,CreatedAt:1710440991133922489,La
bels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{kubernetes.io/config.hash: 8a8d15e80402cb826977826234ee3c6a,kubernetes.io/config.seen: 2024-03-14T18:18:50.238639597Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&PodSandboxMetadata{Name:busybox-5b5d89c9d6-4h99c,Uid:6f1d3430-1aec-4155-8b75-951d851d54ae,Namespace:default,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440495105633530,Labels:map[string]string{app: busybox,io.kubernetes.container.name: POD,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,pod-template-hash: 5b5d89c9d6,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:21:34.785609145Z,kubernetes.io/config.s
ource: api,},RuntimeHandler:,},&PodSandbox{Id:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-jsddl,Uid:bdbdea16-97b0-4581-8bab-9a472af11004,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440346377492166,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:05.130015748Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-cx8rc,Uid:d2e960de-67a9-4385-ba02-78a744602bcc,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440346368515418,Labels:map[string]string{io.kubernetes.container.name: POD,
io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:05.130122921Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&PodSandboxMetadata{Name:kube-proxy-hd8mx,Uid:3e003f67-93dd-4105-a7bd-68d9af563ea4,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440342326310041,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,k8s-app: kube-proxy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T18:19:00.519257450Z,kubernetes.io/config.source: api,},RuntimeHandler
:,},&PodSandbox{Id:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&PodSandboxMetadata{Name:kube-scheduler-ha-105786,Uid:ec78945afcff39cee32fcf6f6d645c30,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440320420085884,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: ec78945afcff39cee32fcf6f6d645c30,kubernetes.io/config.seen: 2024-03-14T18:18:39.863408059Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&PodSandboxMetadata{Name:etcd-ha-105786,Uid:0cd908946f83a665c0ef77bb7bd5e5ea,Namespace:kube-system,Attempt:0,},State:SANDBOX_NOTREADY,CreatedAt:1710440320359365444,Labels:map[string]string{component: etcd,io.kuber
netes.container.name: POD,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.39.170:2379,kubernetes.io/config.hash: 0cd908946f83a665c0ef77bb7bd5e5ea,kubernetes.io/config.seen: 2024-03-14T18:18:39.863402441Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=aa24c297-8cd7-4061-90e9-ca7a7a50e4a6 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.893306478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8d94b17d-01af-4f08-b790-aed25ec6995f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.893404236Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8d94b17d-01af-4f08-b790-aed25ec6995f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:35:37 ha-105786 crio[4214]: time="2024-03-14 18:35:37.894145077Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:3,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710441062319920150,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath:
/dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3bb0081eb2a088a67783f74ea8762f935ba2267ecda81dd22f600c5e225ef268,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:4,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710441061313986054,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 4,io.kubernetes.container.terminationMessagePath: /dev/termin
ation-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710441043317828919,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/
termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710441039326285982,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.ku
bernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3704dc6ef5119d718f8b0d0aae5da3a8a707b5f368270d58475514bb452335f2,PodSandboxId:690093c450be8968abb2b695fe1d0ae627b0925574a53dbb113a6e2ed523372f,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710441031696877165,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubernetes.container.hash: b378400d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessa
gePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ae7c1c8fbe250bc9559c64f2460871b724ba7766704023a79ee5a0b4a7d75477,PodSandboxId:01cedb7ac228961057795e00fa47f27ad5b32716f299d8b1affec39939e63fe7,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710440999492513300,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 566fc43f-5610-4dcd-b683-1cc87e6ed609,},Annotations:map[string]string{io.kubernetes.container.hash: 472355e1,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: Fil
e,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c,PodSandboxId:6fd8ebd3d137788257c04d1c6c0fb9a943eec6e7a8ab02f748e340ecd9ca9f1c,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710440999035590560,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGr
acePeriod: 30,},},&Container{Id:6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b,PodSandboxId:219b0738aa7794ad4f8f7e9f2694353ebb09e9c48102641fc5718bde200a8caa,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:2,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710440998498516832,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-9b2pr,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e23e9c49-0b7d-46ca-ae62-11e9b26a1280,},Annotations:map[string]string{io.kubernetes.container.hash: a8d7a8ef,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a3c
889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5,PodSandboxId:be55e1f9341f67e6f5a9dfe738d1305aefd4fef05778e37d5855d0414a9633f9,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998568521850,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCoun
t: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd,PodSandboxId:55727b536f41958c8837153c0a07d4d21bc95d4c817dd5e7bf2843cb172c6d05,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710440998573648980,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\
":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942,PodSandboxId:d61621e3b64fc6d5885ac6d555e947fca16f5afbb68a345dfef603215464d4c4,Metadata:&ContainerMetadata{Name:kube-vip,Attempt:8,},Image:&ImageSpec{Image:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba,State:CONTAINER_EXITED,CreatedAt:1710440997796066717,Labels:map[string]string{io.kubernetes.container.name: kube-vip,io.kubernetes.pod.name: kube-vip-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kube
rnetes.pod.uid: 8a8d15e80402cb826977826234ee3c6a,},Annotations:map[string]string{io.kubernetes.container.hash: d7e5eb98,io.kubernetes.container.restartCount: 8,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e,PodSandboxId:dba584eb3e199d5ced7b625227cf12d72e6d041319cfe11f0a8ac5be6f03418d,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710440998481172031,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-ha-105786,io.kubernetes.pod.namespace: kube-system,io
.kubernetes.pod.uid: dc5e46764078ce514b56622c3d7888bf,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5,PodSandboxId:ca368554eaafd4270b5b95a12bef3f0312b090136beded5528a036a29c9e787d,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710440998251923020,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 6
dac53b7248a384afeccfc55d43bb2fb,},Annotations:map[string]string{io.kubernetes.container.hash: 665e552a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32,PodSandboxId:3374ed1f059b9e30b704ffd5c51127b18df195fdd43d2d58fb0ccdf80f47e8bf,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710440998272881059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf
6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659,PodSandboxId:22995c012df03f03dd27a9bb18078490321add377002a5740bb98a5085b2c2c0,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710440998202636620,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kuber
netes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:522fa7bdb84ee0c03c76a3c4bba6eedf350008627fe48f3f2c6751e94617832f,PodSandboxId:c09b6e29d418ae2fcbc936426b0d73c92ba5a327abf94d2e5fa25551d72ca14d,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710440496387804188,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-4h99c,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 6f1d3430-1aec-4155-8b75-951d851d54ae,},Annotations:map[string]string{io.kubern
etes.container.hash: b378400d,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817,PodSandboxId:880e93f2a3ed5c051607d88498f98ea585b2f02ce99459c1b296d097ae69378b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346674843863,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-jsddl,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bdbdea16-97b0-4581-8bab-9a472af11004,},Annotations:map[string]string{io.kubernetes.container.hash: 4d4a3a2e,io
.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b,PodSandboxId:c6041a600821e4d5cc2f6ea1ea49db1278fca6d261a1e61108c832d12e09d1d0,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710440346689801526,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: cored
ns-5dd5756b68-cx8rc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d2e960de-67a9-4385-ba02-78a744602bcc,},Annotations:map[string]string{io.kubernetes.container.hash: 17a7eec1,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9,PodSandboxId:6d43c44b3e99ba1ba46657c260fee8c4760367900cf2d1af8245fe07ab3da4d7,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed8
8d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710440342438475307,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hd8mx,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3e003f67-93dd-4105-a7bd-68d9af563ea4,},Annotations:map[string]string{io.kubernetes.container.hash: 40602197,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2,PodSandboxId:ed2bf5bc80b8ec380fdcfd86171e61b3359ff64d26cacf5971a936b3cd2e93cb,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881
d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710440320771301059,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ec78945afcff39cee32fcf6f6d645c30,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc,PodSandboxId:c3fe1175987dfd7f45072d2a1e0656a3593cd2f0f82e16ab49621563ecbeee62,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONT
AINER_EXITED,CreatedAt:1710440320556512066,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-ha-105786,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 0cd908946f83a665c0ef77bb7bd5e5ea,},Annotations:map[string]string{io.kubernetes.container.hash: a64f9d3e,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8d94b17d-01af-4f08-b790-aed25ec6995f name=/runtime.v1.RuntimeService/ListContainers
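	The repeated ListPodSandbox/ListContainers debug entries above are kubelet polling CRI-O over its gRPC socket. For reference only, the same RPC can be issued directly against the runtime; the following is a minimal Go sketch (not part of the test suite), assuming CRI-O's default socket path /var/run/crio/crio.sock and the k8s.io/cri-api client package.

	package main

	import (
		"context"
		"fmt"
		"time"

		"google.golang.org/grpc"
		"google.golang.org/grpc/credentials/insecure"
		runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
	)

	func main() {
		// Dial CRI-O's gRPC socket (path assumed; default for minikube's crio runtime).
		conn, err := grpc.Dial("unix:///var/run/crio/crio.sock",
			grpc.WithTransportCredentials(insecure.NewCredentials()))
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		client := runtimeapi.NewRuntimeServiceClient(conn)
		ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
		defer cancel()

		// Same RPC as the /runtime.v1.RuntimeService/ListContainers entries logged above,
		// with no filter (crio then logs "No filters were applied, returning full container list").
		resp, err := client.ListContainers(ctx, &runtimeapi.ListContainersRequest{})
		if err != nil {
			panic(err)
		}
		for _, c := range resp.Containers {
			fmt.Printf("%s  %-25s attempt=%d  %s\n",
				c.GetId()[:13], c.GetMetadata().GetName(), c.GetMetadata().GetAttempt(), c.GetState())
		}
	}

	Its output corresponds to the "container status" table that follows, which minikube collects when gathering logs for a failed test.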
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	9604ba67bf9c8       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      4 minutes ago       Running             kindnet-cni               3                   219b0738aa779       kindnet-9b2pr
	3bb0081eb2a08       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      4 minutes ago       Running             storage-provisioner       4                   01cedb7ac2289       storage-provisioner
	e23ea5e28f6e5       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      4 minutes ago       Running             kube-controller-manager   2                   dba584eb3e199       kube-controller-manager-ha-105786
	73d5356f4c557       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      4 minutes ago       Running             kube-apiserver            3                   ca368554eaafd       kube-apiserver-ha-105786
	3704dc6ef5119       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      5 minutes ago       Running             busybox                   1                   690093c450be8       busybox-5b5d89c9d6-4h99c
	ae7c1c8fbe250       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      5 minutes ago       Exited              storage-provisioner       3                   01cedb7ac2289       storage-provisioner
	dbe0af9dcb333       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      5 minutes ago       Running             kube-proxy                1                   6fd8ebd3d1377       kube-proxy-hd8mx
	25d7b21ffe66f       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   55727b536f419       coredns-5dd5756b68-cx8rc
	a3c889282a698       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      5 minutes ago       Running             coredns                   1                   be55e1f9341f6       coredns-5dd5756b68-jsddl
	6d30bfdc11c1c       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      5 minutes ago       Exited              kindnet-cni               2                   219b0738aa779       kindnet-9b2pr
	28469939f60bd       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      5 minutes ago       Exited              kube-controller-manager   1                   dba584eb3e199       kube-controller-manager-ha-105786
	4269a70d03936       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      5 minutes ago       Running             kube-scheduler            1                   3374ed1f059b9       kube-scheduler-ha-105786
	782768ee692d7       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      5 minutes ago       Exited              kube-apiserver            2                   ca368554eaafd       kube-apiserver-ha-105786
	56ede7d89c5f7       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      5 minutes ago       Running             etcd                      1                   22995c012df03       etcd-ha-105786
	ded817e115254       22aaebb38f4a9f54562fab7b3a59b206e32f59a368c5749c96d06f5a1c187dba                                      5 minutes ago       Exited              kube-vip                  8                   d61621e3b64fc       kube-vip-ha-105786
	522fa7bdb84ee       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   14 minutes ago      Exited              busybox                   0                   c09b6e29d418a       busybox-5b5d89c9d6-4h99c
	b538852248364       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   c6041a600821e       coredns-5dd5756b68-cx8rc
	4fbdd8b34ac46       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      16 minutes ago      Exited              coredns                   0                   880e93f2a3ed5       coredns-5dd5756b68-jsddl
	50a3dcdc83e53       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      16 minutes ago      Exited              kube-proxy                0                   6d43c44b3e99b       kube-proxy-hd8mx
	3f27ba9bd31a4       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      16 minutes ago      Exited              kube-scheduler            0                   ed2bf5bc80b8e       kube-scheduler-ha-105786
	ff7528019bad0       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      16 minutes ago      Exited              etcd                      0                   c3fe1175987df       etcd-ha-105786
	
	
	==> coredns [25d7b21ffe66f701477f3f72388a31bf9a4fc5cc140bbe4f2da38ec37c5fdefd] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34707 - 24413 "HINFO IN 4449984729202792723.1825095687933891679. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.008723816s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": net/http: TLS handshake timeout
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused - error from a previous attempt: read tcp 10.244.0.6:54512->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [4fbdd8b34ac4616841c79214ad8e8ad0aaddeedf79d2c6e38e16679a12786817] <==
	[INFO] 10.244.0.4:39798 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000049996s
	[INFO] 10.244.0.4:39218 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000088514s
	[INFO] 10.244.2.2:53227 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000129249s
	[INFO] 10.244.1.2:38289 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000162454s
	[INFO] 10.244.1.2:39880 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000157216s
	[INFO] 10.244.0.4:40457 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000166755s
	[INFO] 10.244.0.4:47654 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000165231s
	[INFO] 10.244.2.2:56922 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.00021872s
	[INFO] 10.244.2.2:55729 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000082709s
	[INFO] 10.244.2.2:40076 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000091316s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: Get "https://10.96.0.1:443/api/v1/namespaces?allowWatchBookmarks=true&resourceVersion=1530&timeout=6m55s&timeoutSeconds=415&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=8m55s&timeoutSeconds=535&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[86211054]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.833) (total time: 12284ms):
	Trace[86211054]: ---"Objects listed" error:Unauthorized 12284ms (18:28:13.118)
	Trace[86211054]: [12.284846568s] [12.284846568s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Namespace: Unauthorized
	[INFO] plugin/kubernetes: Trace[112395228]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:01.095) (total time: 12023ms):
	Trace[112395228]: ---"Objects listed" error:Unauthorized 12023ms (18:28:13.118)
	Trace[112395228]: [12.023135648s] [12.023135648s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: failed to list *v1.Namespace: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [a3c889282a6985774fbdf231a8378b5a3ac3275afe994249c8d3bfc5a7dae2e5] <==
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:44600 - 56811 "HINFO IN 296503675936248183.3736117151437012322. udp 56 false 512" NXDOMAIN qr,rd,ra 56 0.006872615s
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 10.244.0.5:43650->10.96.0.1:443: read: connection reset by peer
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	[WARNING] plugin/kubernetes: Kubernetes API connection failure: Get "https://10.96.0.1:443/version": dial tcp 10.96.0.1:443: connect: connection refused
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	
	
	==> coredns [b538852248364e0b1756547d9f87693e2d146c0e882f74547d6c5a45f6e3882b] <==
	[INFO] 10.244.2.2:35209 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000219373s
	[INFO] 10.244.1.2:37537 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000135802s
	[INFO] 10.244.1.2:50389 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000105335s
	[INFO] 10.244.0.4:53486 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000200332s
	[INFO] 10.244.0.4:53550 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000308188s
	[INFO] 10.244.2.2:59521 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000134191s
	[INFO] 10.244.1.2:43514 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000127501s
	[INFO] 10.244.1.2:54638 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000089684s
	[INFO] 10.244.1.2:43811 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.000187377s
	[INFO] 10.244.1.2:38538 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000164864s
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: Get "https://10.96.0.1:443/apis/discovery.k8s.io/v1/endpointslices?allowWatchBookmarks=true&resourceVersion=1535&timeout=9m47s&timeoutSeconds=587&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: Get "https://10.96.0.1:443/api/v1/services?allowWatchBookmarks=true&resourceVersion=1529&timeout=8m3s&timeoutSeconds=483&watch=true": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.EndpointSlice: Unauthorized
	[INFO] plugin/kubernetes: Trace[601454333]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.779) (total time: 12338ms):
	Trace[601454333]: ---"Objects listed" error:Unauthorized 12338ms (18:28:13.117)
	Trace[601454333]: [12.338439408s] [12.338439408s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Namespace: the server has asked for the client to provide credentials (get namespaces) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	[INFO] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: failed to list *v1.Service: Unauthorized
	[INFO] plugin/kubernetes: Trace[1851486924]: "Reflector ListAndWatch" name:pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169 (14-Mar-2024 18:28:00.685) (total time: 12432ms):
	Trace[1851486924]: ---"Objects listed" error:Unauthorized 12432ms (18:28:13.118)
	Trace[1851486924]: [12.432164604s] [12.432164604s] END
	[ERROR] plugin/kubernetes: pkg/mod/k8s.io/client-go@v0.26.1/tools/cache/reflector.go:169: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               ha-105786
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_18_51_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:18:46 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:35:37 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:18:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:30:43 +0000   Thu, 14 Mar 2024 18:19:05 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.170
	  Hostname:    ha-105786
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 83805f81be844e0c8f423f0d34e721b6
	  System UUID:                83805f81-be84-4e0c-8f42-3f0d34e721b6
	  Boot ID:                    592e9c66-43d6-494c-b6d9-c848f3c684fc
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (11 in total)
	  Namespace                   Name                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                 ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-4h99c             0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 coredns-5dd5756b68-cx8rc             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-jsddl             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-ha-105786                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kindnet-9b2pr                        100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      16m
	  kube-system                 kube-apiserver-ha-105786             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-ha-105786    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-hd8mx                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-ha-105786             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-vip-ha-105786                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 storage-provisioner                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   100m (5%)
	  memory             290Mi (13%)  390Mi (18%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 4m53s                  kube-proxy       
	  Normal   Starting                 16m                    kube-proxy       
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m (x8 over 16m)      kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasSufficientPID     16m (x7 over 16m)      kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   NodeHasNoDiskPressure    16m (x8 over 16m)      kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeAllocatableEnforced  16m                    kubelet          Updated Node Allocatable limit across pods
	  Normal   Starting                 16m                    kubelet          Starting kubelet.
	  Normal   NodeHasSufficientMemory  16m                    kubelet          Node ha-105786 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    16m                    kubelet          Node ha-105786 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     16m                    kubelet          Node ha-105786 status is now: NodeHasSufficientPID
	  Normal   RegisteredNode           16m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   NodeReady                16m                    kubelet          Node ha-105786 status is now: NodeReady
	  Normal   RegisteredNode           15m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           14m                    node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Warning  ContainerGCFailed        5m48s (x2 over 6m48s)  kubelet          rpc error: code = Unavailable desc = connection error: desc = "transport: Error while dialing: dial unix /var/run/crio/crio.sock: connect: no such file or directory"
	  Normal   RegisteredNode           4m45s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           4m43s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	  Normal   RegisteredNode           3m45s                  node-controller  Node ha-105786 event: Registered Node ha-105786 in Controller
	
	
	Name:               ha-105786-m02
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_20_04_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:19:53 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:35:30 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:31:25 +0000   Thu, 14 Mar 2024 18:30:45 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.245
	  Hostname:    ha-105786-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 d19ca741ee10483194e2397e40db9727
	  System UUID:                d19ca741-ee10-4831-94e2-397e40db9727
	  Boot ID:                    1fd6919e-b89c-4ad1-b096-54490f7c15ac
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                     CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                     ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-k6gxp                 0 (0%)        0 (0%)      0 (0%)           0 (0%)         14m
	  kube-system                 etcd-ha-105786-m02                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kindnet-vpgvl                            100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      15m
	  kube-system                 kube-apiserver-ha-105786-m02             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-ha-105786-m02    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-qpz89                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-ha-105786-m02             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-vip-ha-105786-m02                   0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests    Limits
	  --------           --------    ------
	  cpu                750m (37%)  100m (5%)
	  memory             150Mi (7%)  50Mi (2%)
	  ephemeral-storage  0 (0%)      0 (0%)
	  hugepages-2Mi      0 (0%)      0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 15m                    kube-proxy       
	  Normal  Starting                 4m38s                  kube-proxy       
	  Normal  RegisteredNode           15m                    node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           14m                    node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  NodeNotReady             12m                    node-controller  Node ha-105786-m02 status is now: NodeNotReady
	  Normal  NodeHasNoDiskPressure    5m21s (x8 over 5m21s)  kubelet          Node ha-105786-m02 status is now: NodeHasNoDiskPressure
	  Normal  Starting                 5m21s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  5m21s (x8 over 5m21s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasSufficientPID     5m21s (x7 over 5m21s)  kubelet          Node ha-105786-m02 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  5m21s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           4m45s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           4m43s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	  Normal  RegisteredNode           3m45s                  node-controller  Node ha-105786-m02 event: Registered Node ha-105786-m02 in Controller
	
	
	Name:               ha-105786-m04
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=ha-105786-m04
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=ha-105786
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_22_12_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:22:12 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  ha-105786-m04
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:32:38 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:33:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:33:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:33:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 18:32:18 +0000   Thu, 14 Mar 2024 18:33:20 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.73
	  Hostname:    ha-105786-m04
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 7e09570d6bc045a59dfec434fd490a91
	  System UUID:                7e09570d-6bc0-45a5-9dfe-c434fd490a91
	  Boot ID:                    ebe73455-63b4-449d-a759-d50e720d4746
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.3.0/24
	PodCIDRs:                     10.244.3.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-sft2w    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m10s
	  kube-system                 kindnet-fzjdr               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      13m
	  kube-system                 kube-proxy-bftws            0 (0%)        0 (0%)      0 (0%)           0 (0%)         13m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type     Reason                   Age                    From             Message
	  ----     ------                   ----                   ----             -------
	  Normal   Starting                 3m16s                  kube-proxy       
	  Normal   Starting                 13m                    kube-proxy       
	  Normal   RegisteredNode           13m                    node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           13m                    node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             12m                    node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   NodeHasSufficientPID     11m (x6 over 13m)      kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Normal   NodeReady                11m (x2 over 13m)      kubelet          Node ha-105786-m04 status is now: NodeReady
	  Normal   NodeHasSufficientMemory  11m (x6 over 13m)      kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    11m (x6 over 13m)      kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   RegisteredNode           4m45s                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   RegisteredNode           4m43s                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   NodeNotReady             4m5s                   node-controller  Node ha-105786-m04 status is now: NodeNotReady
	  Normal   RegisteredNode           3m45s                  node-controller  Node ha-105786-m04 event: Registered Node ha-105786-m04 in Controller
	  Normal   Starting                 3m20s                  kubelet          Starting kubelet.
	  Normal   NodeAllocatableEnforced  3m20s                  kubelet          Updated Node Allocatable limit across pods
	  Normal   NodeHasSufficientMemory  3m20s (x2 over 3m20s)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientMemory
	  Normal   NodeHasNoDiskPressure    3m20s (x2 over 3m20s)  kubelet          Node ha-105786-m04 status is now: NodeHasNoDiskPressure
	  Normal   NodeHasSufficientPID     3m20s (x2 over 3m20s)  kubelet          Node ha-105786-m04 status is now: NodeHasSufficientPID
	  Warning  Rebooted                 3m20s                  kubelet          Node ha-105786-m04 has been rebooted, boot id: ebe73455-63b4-449d-a759-d50e720d4746
	  Normal   NodeReady                3m20s                  kubelet          Node ha-105786-m04 status is now: NodeReady
	  Normal   NodeNotReady             2m18s                  node-controller  Node ha-105786-m04 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +8.323219] systemd-fstab-generator[597]: Ignoring "noauto" option for root device
	[  +0.062088] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.057119] systemd-fstab-generator[609]: Ignoring "noauto" option for root device
	[  +0.191786] systemd-fstab-generator[623]: Ignoring "noauto" option for root device
	[  +0.127293] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.261879] systemd-fstab-generator[661]: Ignoring "noauto" option for root device
	[  +5.345908] systemd-fstab-generator[761]: Ignoring "noauto" option for root device
	[  +0.065032] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.795309] systemd-fstab-generator[942]: Ignoring "noauto" option for root device
	[  +0.848496] kauditd_printk_skb: 46 callbacks suppressed
	[  +7.157868] kauditd_printk_skb: 51 callbacks suppressed
	[  +2.914153] systemd-fstab-generator[1432]: Ignoring "noauto" option for root device
	[Mar14 18:19] kauditd_printk_skb: 27 callbacks suppressed
	[  +5.437406] kauditd_printk_skb: 73 callbacks suppressed
	[Mar14 18:29] systemd-fstab-generator[4133]: Ignoring "noauto" option for root device
	[  +0.177931] systemd-fstab-generator[4145]: Ignoring "noauto" option for root device
	[  +0.190681] systemd-fstab-generator[4159]: Ignoring "noauto" option for root device
	[  +0.144149] systemd-fstab-generator[4171]: Ignoring "noauto" option for root device
	[  +0.260798] systemd-fstab-generator[4195]: Ignoring "noauto" option for root device
	[  +3.681548] systemd-fstab-generator[4302]: Ignoring "noauto" option for root device
	[  +6.707178] kauditd_printk_skb: 127 callbacks suppressed
	[Mar14 18:30] kauditd_printk_skb: 88 callbacks suppressed
	[ +27.847589] kauditd_printk_skb: 1 callbacks suppressed
	[  +5.765007] kauditd_printk_skb: 1 callbacks suppressed
	
	
	==> etcd [56ede7d89c5f748ac6b6fb5966a6edef355ecdd73707aa895ef5371faabf0659] <==
	{"level":"info","ts":"2024-03-14T18:31:34.698048Z","caller":"rafthttp/stream.go:412","msg":"established TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:31:34.705084Z","caller":"rafthttp/stream.go:249","msg":"set message encoder","from":"6b385368e7357343","to":"49a9455a573a24bd","stream-type":"stream Message"}
	{"level":"info","ts":"2024-03-14T18:31:34.705143Z","caller":"rafthttp/stream.go:274","msg":"established TCP streaming connection with remote peer","stream-writer-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:31:34.713654Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:55874","server-name":"","error":"EOF"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744598Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_RAFT_MESSAGE","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"warn","ts":"2024-03-14T18:31:34.744677Z","caller":"rafthttp/probing_status.go:68","msg":"prober detected unhealthy status","round-tripper-name":"ROUND_TRIPPER_SNAPSHOT","remote-peer-id":"49a9455a573a24bd","rtt":"0s","error":"dial tcp 192.168.39.190:2380: connect: connection refused"}
	{"level":"info","ts":"2024-03-14T18:31:41.235943Z","caller":"traceutil/trace.go:171","msg":"trace[1249088434] transaction","detail":"{read_only:false; response_revision:1970; number_of_response:1; }","duration":"103.823426ms","start":"2024-03-14T18:31:41.132016Z","end":"2024-03-14T18:31:41.235839Z","steps":["trace[1249088434] 'process raft request'  (duration: 103.664046ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:31:41.235959Z","caller":"traceutil/trace.go:171","msg":"trace[246627116] transaction","detail":"{read_only:false; response_revision:1969; number_of_response:1; }","duration":"110.52398ms","start":"2024-03-14T18:31:41.125415Z","end":"2024-03-14T18:31:41.235939Z","steps":["trace[246627116] 'process raft request'  (duration: 110.04873ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T18:32:32.288844Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b385368e7357343 switched to configuration voters=(6850040302934775540 7726016870774829891)"}
	{"level":"info","ts":"2024-03-14T18:32:32.289343Z","caller":"membership/cluster.go:472","msg":"removed member","cluster-id":"73eff271b33bb37a","local-member-id":"6b385368e7357343","removed-remote-peer-id":"49a9455a573a24bd","removed-remote-peer-urls":["https://192.168.39.190:2380"]}
	{"level":"info","ts":"2024-03-14T18:32:32.289476Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.289964Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290046Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290411Z","caller":"rafthttp/stream.go:286","msg":"closed TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290472Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.290572Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290852Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd","error":"context canceled"}
	{"level":"warn","ts":"2024-03-14T18:32:32.290931Z","caller":"rafthttp/peer_status.go:66","msg":"peer became inactive (message send to peer failed)","peer-id":"49a9455a573a24bd","error":"failed to read 49a9455a573a24bd on stream MsgApp v2 (context canceled)"}
	{"level":"info","ts":"2024-03-14T18:32:32.290971Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.291084Z","caller":"rafthttp/stream.go:421","msg":"lost TCP streaming connection with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd","error":"context canceled"}
	{"level":"info","ts":"2024-03-14T18:32:32.291104Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.291125Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:32:32.291137Z","caller":"rafthttp/transport.go:355","msg":"removed remote peer","local-member-id":"6b385368e7357343","removed-remote-peer-id":"49a9455a573a24bd"}
	{"level":"warn","ts":"2024-03-14T18:32:32.302551Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:54328","server-name":"","error":"read tcp 192.168.39.170:2380->192.168.39.190:54328: read: connection reset by peer"}
	{"level":"warn","ts":"2024-03-14T18:32:32.30751Z","caller":"embed/config_logging.go:169","msg":"rejected connection","remote-addr":"192.168.39.190:54340","server-name":"","error":"EOF"}
	
	
	==> etcd [ff7528019bad042db647d58473442eee198040c2dc394a64c24ad82ccd8ce0fc] <==
	{"level":"info","ts":"2024-03-14T18:28:15.122911Z","caller":"traceutil/trace.go:171","msg":"trace[256093788] range","detail":"{range_begin:/registry/resourcequotas/; range_end:/registry/resourcequotas0; }","duration":"8.014776074s","start":"2024-03-14T18:28:07.108124Z","end":"2024-03-14T18:28:15.122901Z","steps":["trace[256093788] 'agreement among raft nodes before linearized reading'  (duration: 8.009384348s)"],"step_count":1}
	{"level":"warn","ts":"2024-03-14T18:28:15.122939Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.108122Z","time spent":"8.014802071s","remote":"127.0.0.1:48602","response type":"/etcdserverpb.KV/Range","request count":0,"request size":57,"response count":0,"response size":0,"request content":"key:\"/registry/resourcequotas/\" range_end:\"/registry/resourcequotas0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.120149Z","caller":"v3rpc/interceptor.go:197","msg":"request stats","start time":"2024-03-14T18:28:07.116653Z","time spent":"8.003484208s","remote":"127.0.0.1:48846","response type":"/etcdserverpb.KV/Range","request count":0,"request size":53,"response count":0,"response size":0,"request content":"key:\"/registry/clusterroles/\" range_end:\"/registry/clusterroles0\" limit:10000 "}
	WARNING: 2024/03/14 18:28:15 [core] grpc: Server.processUnaryRPC failed to write status: connection error: desc = "transport is closing"
	{"level":"warn","ts":"2024-03-14T18:28:15.184325Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:28:15.184437Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.170:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T18:28:15.184672Z","caller":"etcdserver/server.go:1456","msg":"skipped leadership transfer; local server is not leader","local-member-id":"6b385368e7357343","current-leader-member-id":"0"}
	{"level":"info","ts":"2024-03-14T18:28:15.185082Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185145Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185176Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.18525Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185323Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185356Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185366Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"5f103b5cc98956f4"}
	{"level":"info","ts":"2024-03-14T18:28:15.185371Z","caller":"rafthttp/peer.go:330","msg":"stopping remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185379Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream MsgApp v2","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185424Z","caller":"rafthttp/stream.go:294","msg":"stopped TCP streaming connection with remote peer","stream-writer-type":"stream Message","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185472Z","caller":"rafthttp/pipeline.go:85","msg":"stopped HTTP pipelining with remote peer","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185596Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream MsgApp v2","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185822Z","caller":"rafthttp/stream.go:442","msg":"stopped stream reader with remote peer","stream-reader-type":"stream Message","local-member-id":"6b385368e7357343","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.185872Z","caller":"rafthttp/peer.go:335","msg":"stopped remote peer","remote-peer-id":"49a9455a573a24bd"}
	{"level":"info","ts":"2024-03-14T18:28:15.190126Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190236Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.170:2380"}
	{"level":"info","ts":"2024-03-14T18:28:15.190247Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"ha-105786","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.170:2380"],"advertise-client-urls":["https://192.168.39.170:2379"]}
	
	
	==> kernel <==
	 18:35:38 up 17 min,  0 users,  load average: 0.11, 0.33, 0.26
	Linux ha-105786 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [6d30bfdc11c1cfe635d141f53fe2b82002b29955837bf52908ae23b04069b89b] <==
	I0314 18:29:59.107185       1 main.go:150] noMask IPv4 subnets: [10.244.0.0/16]
	I0314 18:30:02.316124       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:05.389271       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	I0314 18:30:16.399467       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": net/http: TLS handshake timeout
	I0314 18:30:23.820510       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host - error from a previous attempt: read tcp 192.168.122.15:54244->10.96.0.1:443: read: connection reset by peer
	I0314 18:30:26.896814       1 main.go:191] Failed to get nodes, retrying after error: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	panic: Reached maximum retries obtaining node list: Get "https://10.96.0.1:443/api/v1/nodes": dial tcp 10.96.0.1:443: connect: no route to host
	
	goroutine 1 [running]:
	main.main()
		/go/src/cmd/kindnetd/main.go:195 +0xd3d
	
	
	==> kindnet [9604ba67bf9c86cbe22d1a792a3c4accd57e405b969c45c0043fdfca89a3d3ad] <==
	I0314 18:34:53.665238       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:35:03.690039       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:35:03.690089       1 main.go:227] handling current node
	I0314 18:35:03.690106       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:35:03.690113       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:35:03.690273       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:35:03.690310       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:35:13.700761       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:35:13.700957       1 main.go:227] handling current node
	I0314 18:35:13.701044       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:35:13.701099       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:35:13.701298       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:35:13.701360       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:35:23.715463       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:35:23.715518       1 main.go:227] handling current node
	I0314 18:35:23.715531       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:35:23.715537       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:35:23.715643       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:35:23.715648       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	I0314 18:35:33.722819       1 main.go:223] Handling node with IPs: map[192.168.39.170:{}]
	I0314 18:35:33.722844       1 main.go:227] handling current node
	I0314 18:35:33.722852       1 main.go:223] Handling node with IPs: map[192.168.39.245:{}]
	I0314 18:35:33.722857       1 main.go:250] Node ha-105786-m02 has CIDR [10.244.1.0/24] 
	I0314 18:35:33.722957       1 main.go:223] Handling node with IPs: map[192.168.39.73:{}]
	I0314 18:35:33.722989       1 main.go:250] Node ha-105786-m04 has CIDR [10.244.3.0/24] 
	
	
	==> kube-apiserver [73d5356f4c557cd12975d52d71d316952e4dc8d7444d139c3c0cae66dec8803f] <==
	E0314 18:30:41.612319       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"context canceled"}: context canceled
	E0314 18:30:41.612403       1 writers.go:122] apiserver was unable to write a JSON response: http: Handler timeout
	I0314 18:30:41.613271       1 shared_informer.go:318] Caches are synced for configmaps
	E0314 18:30:41.613373       1 status.go:71] apiserver received an error that is not an metav1.Status: &errors.errorString{s:"http: Handler timeout"}: http: Handler timeout
	I0314 18:30:41.613420       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:30:41.613426       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	E0314 18:30:41.613526       1 writers.go:135] apiserver was unable to write a fallback JSON response: http: Handler timeout
	E0314 18:30:41.619858       1 timeout.go:142] post-timeout activity - time-elapsed: 8.755378ms, GET "/apis/coordination.k8s.io/v1/namespaces/kube-system/leases/plndr-cp-lock" result: <nil>
	I0314 18:30:41.628820       1 shared_informer.go:318] Caches are synced for node_authorizer
	W0314 18:30:41.647391       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.190]
	I0314 18:30:41.648795       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:30:41.653582       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:30:41.653682       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:30:41.653783       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:30:41.653807       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:30:41.653831       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:30:41.671104       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	E0314 18:30:41.690757       1 controller.go:95] Found stale data, removed previous endpoints on kubernetes service, apiserver didn't exit successfully previously
	I0314 18:30:42.509110       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0314 18:30:42.924430       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.170 192.168.39.190 192.168.39.245]
	I0314 18:31:33.971872       1 trace.go:236] Trace[27322001]: "Update" accept:application/vnd.kubernetes.protobuf, */*,audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,client:192.168.39.170,protocol:HTTP/2.0,resource:daemonsets,scope:resource,url:/apis/apps/v1/namespaces/kube-system/daemonsets/kube-proxy/status,user-agent:kube-controller-manager/v1.28.4 (linux/amd64) kubernetes/bae2c62/system:serviceaccount:kube-system:daemon-set-controller,verb:PUT (14-Mar-2024 18:31:33.455) (total time: 516ms):
	Trace[27322001]: ["GuaranteedUpdate etcd3" audit-id:3a44bd88-72c0-4501-b4d7-81437002d596,key:/daemonsets/kube-system/kube-proxy,type:*apps.DaemonSet,resource:daemonsets.apps 515ms (18:31:33.456)
	Trace[27322001]:  ---"Txn call completed" 509ms (18:31:33.971)]
	Trace[27322001]: [516.725224ms] [516.725224ms] END
	E0314 18:32:40.396291       1 watch.go:287] unable to encode watch object *v1.WatchEvent: client disconnected (&streaming.encoderWithAllocator{writer:(*framer.lengthDelimitedFrameWriter)(0xc004666ca8), encoder:(*versioning.codec)(0xc002222280), memAllocator:(*runtime.Allocator)(0xc004666cc0)})
	
	
	==> kube-apiserver [782768ee692d71616b1562024359ce63e377fe3608a900a89cf6959a5feec0a5] <==
	I0314 18:29:59.229131       1 options.go:220] external host was not specified, using 192.168.39.170
	I0314 18:29:59.230543       1 server.go:148] Version: v1.28.4
	I0314 18:29:59.230609       1 server.go:150] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:29:59.822983       1 shared_informer.go:311] Waiting for caches to sync for node_authorizer
	I0314 18:29:59.835789       1 plugins.go:158] Loaded 12 mutating admission controller(s) successfully in the following order: NamespaceLifecycle,LimitRanger,ServiceAccount,NodeRestriction,TaintNodesByCondition,Priority,DefaultTolerationSeconds,DefaultStorageClass,StorageObjectInUseProtection,RuntimeClass,DefaultIngressClass,MutatingAdmissionWebhook.
	I0314 18:29:59.835914       1 plugins.go:161] Loaded 13 validating admission controller(s) successfully in the following order: LimitRanger,ServiceAccount,PodSecurity,Priority,PersistentVolumeClaimResize,RuntimeClass,CertificateApproval,CertificateSigning,ClusterTrustBundleAttest,CertificateSubjectRestriction,ValidatingAdmissionPolicy,ValidatingAdmissionWebhook,ResourceQuota.
	I0314 18:29:59.836208       1 instance.go:298] Using reconciler: lease
	W0314 18:30:19.820801       1 logging.go:59] [core] [Channel #1 SubChannel #4] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context canceled"
	W0314 18:30:19.823745       1 logging.go:59] [core] [Channel #2 SubChannel #3] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	W0314 18:30:19.837423       1 logging.go:59] [core] [Channel #5 SubChannel #6] grpc: addrConn.createTransport failed to connect to {Addr: "127.0.0.1:2379", ServerName: "127.0.0.1", }. Err: connection error: desc = "transport: authentication handshake failed: context deadline exceeded"
	F0314 18:30:19.837443       1 instance.go:291] Error creating leases: error creating storage factory: context deadline exceeded
	
	
	==> kube-controller-manager [28469939f60bd1028645c2fcd66e2b18ea4695d6b9483613a840a93ecd963a1e] <==
	I0314 18:30:00.096891       1 serving.go:348] Generated self-signed cert in-memory
	I0314 18:30:00.761362       1 controllermanager.go:189] "Starting" version="v1.28.4"
	I0314 18:30:00.761449       1 controllermanager.go:191] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:00.763371       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 18:30:00.763568       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 18:30:00.764667       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:30:00.764857       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	E0314 18:30:20.844984       1 controllermanager.go:235] "Error building controller context" err="failed to wait for apiserver being healthy: timed out waiting for the condition: failed to get apiserver /healthz status: Get \"https://192.168.39.170:8443/healthz\": dial tcp 192.168.39.170:8443: connect: connection refused"
	
	
	==> kube-controller-manager [e23ea5e28f6e5e981b82462c38d332a8f5b100776e7ab6fb76ccac464d622a8f] <==
	I0314 18:32:30.507238       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="40.634266ms"
	I0314 18:32:30.507408       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="54.288µs"
	I0314 18:32:44.012373       1 topologycache.go:237] "Can't get CPU or zone information for node" node="ha-105786-m04"
	I0314 18:32:45.352183       1 event.go:307] "Event occurred" object="ha-105786-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node ha-105786-m03 event: Removing Node ha-105786-m03 from Controller"
	E0314 18:32:55.360653       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360820       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360851       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360877       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360908       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:32:55.360932       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362121       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362143       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362155       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362161       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362167       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	E0314 18:33:15.362173       1 gc_controller.go:153] "Failed to get node" err="node \"ha-105786-m03\" not found" node="ha-105786-m03"
	I0314 18:33:20.373390       1 event.go:307] "Event occurred" object="ha-105786-m04" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node ha-105786-m04 status is now: NodeNotReady"
	I0314 18:33:20.401264       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-bftws" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:33:20.419365       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-sft2w" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:33:20.447465       1 event.go:307] "Event occurred" object="kube-system/kindnet-fzjdr" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:33:20.478598       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="60.036857ms"
	I0314 18:33:20.478755       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="107.176µs"
	I0314 18:33:33.523451       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="174.466µs"
	I0314 18:33:33.545209       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.966µs"
	I0314 18:33:33.551794       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="157.264µs"
	
	
	==> kube-proxy [50a3dcdc83e53973d325ff99d18bf580a206450c82dc97c1519ca91c42cbc2d9] <==
	I0314 18:19:02.664250       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:19:02.714117       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:19:02.714162       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:19:02.716766       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:19:02.717745       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:19:02.718063       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:19:02.718100       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:19:02.720154       1 config.go:188] "Starting service config controller"
	I0314 18:19:02.720408       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:19:02.720465       1 config.go:315] "Starting node config controller"
	I0314 18:19:02.720491       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:19:02.721511       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:19:02.721558       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:19:02.820597       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:19:02.820664       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:19:02.822840       1 shared_informer.go:318] Caches are synced for endpoint slice config
	E0314 18:28:13.122344       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: the server has asked for the client to provide credentials (get endpointslices.discovery.k8s.io) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: the server has asked for the client to provide credentials (get nodes) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	E0314 18:28:13.122518       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: the server has asked for the client to provide credentials (get services) - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=23, ErrCode=NO_ERROR, debug=""
	W0314 18:28:15.103846       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: Unauthorized
	E0314 18:28:15.103895       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: Unauthorized
	W0314 18:28:15.103973       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.EndpointSlice: Unauthorized
	E0314 18:28:15.103981       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.EndpointSlice: failed to list *v1.EndpointSlice: Unauthorized
	W0314 18:28:15.107552       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Unauthorized
	E0314 18:28:15.107669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Unauthorized
	
	
	==> kube-proxy [dbe0af9dcb3339f8fe976dec492d90fe7673a17deb6921aec0281e22319b535c] <==
	I0314 18:30:00.646660       1 server_others.go:69] "Using iptables proxy"
	E0314 18:30:03.724306       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:06.798355       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:09.869081       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:16.012282       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	E0314 18:30:28.301082       1 node.go:130] Failed to retrieve node info: Get "https://control-plane.minikube.internal:8443/api/v1/nodes/ha-105786": dial tcp 192.168.39.254:8443: connect: no route to host
	I0314 18:30:44.907767       1 node.go:141] Successfully retrieved node IP: 192.168.39.170
	I0314 18:30:44.959317       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:30:44.959379       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:30:44.963966       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:30:44.964077       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:30:44.964340       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:30:44.964376       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:30:44.966061       1 config.go:188] "Starting service config controller"
	I0314 18:30:44.966133       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:30:44.966161       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:30:44.966165       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:30:44.966945       1 config.go:315] "Starting node config controller"
	I0314 18:30:44.966981       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:30:45.066672       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:30:45.066839       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:30:45.067090       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [3f27ba9bd31a44ad12372d6c7213ed101d6aac3cfb3cb554d7066e5206d3e9d2] <==
	W0314 18:28:07.898881       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:07.898938       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 18:28:07.926438       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 18:28:07.926552       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 18:28:08.127945       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:08.128049       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:08.703213       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:28:08.703272       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:28:08.811171       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:28:08.811297       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:28:09.018063       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:28:09.018138       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:28:09.165241       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:28:09.165338       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:28:09.363080       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:28:09.363186       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:28:10.170615       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:28:10.170677       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:28:10.170950       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:28:10.171006       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:28:10.215817       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:28:10.215918       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 18:28:14.418627       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:14.418766       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 18:28:15.079433       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [4269a70d03936a7320aa62d7a1b29e36694791da3b427440a23c45a80d225b32] <==
	W0314 18:30:38.717457       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.717554       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://192.168.39.170:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:38.957547       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:38.957635       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: Get "https://192.168.39.170:8443/api/v1/namespaces/kube-system/configmaps?fieldSelector=metadata.name%!D(MISSING)extension-apiserver-authentication&limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.067393       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.067456       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: Get "https://192.168.39.170:8443/apis/storage.k8s.io/v1/csistoragecapacities?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:39.432829       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	E0314 18:30:39.432878       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: Get "https://192.168.39.170:8443/apis/policy/v1/poddisruptionbudgets?limit=500&resourceVersion=0": dial tcp 192.168.39.170:8443: connect: connection refused
	W0314 18:30:41.548905       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 18:30:41.548991       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 18:30:41.549072       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:30:41.549103       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:30:41.549174       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:30:41.549221       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:30:41.549408       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:30:41.549568       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 18:30:41.549594       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:30:41.549738       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:30:41.559235       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 18:30:41.560822       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	I0314 18:30:52.555852       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	E0314 18:32:28.931399       1 framework.go:1206] "Plugin Failed" err="Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sft2w\": pod busybox-5b5d89c9d6-sft2w is already assigned to node \"ha-105786-m04\"" plugin="DefaultBinder" pod="default/busybox-5b5d89c9d6-sft2w" node="ha-105786-m04"
	E0314 18:32:28.931581       1 schedule_one.go:319] "scheduler cache ForgetPod failed" err="pod 038d6fed-3cc9-42bc-97bd-27205fe77213(default/busybox-5b5d89c9d6-sft2w) wasn't assumed so cannot be forgotten"
	E0314 18:32:28.931676       1 schedule_one.go:989] "Error scheduling pod; retrying" err="running Bind plugin \"DefaultBinder\": Operation cannot be fulfilled on pods/binding \"busybox-5b5d89c9d6-sft2w\": pod busybox-5b5d89c9d6-sft2w is already assigned to node \"ha-105786-m04\"" pod="default/busybox-5b5d89c9d6-sft2w"
	I0314 18:32:28.931820       1 schedule_one.go:1002] "Pod has been assigned to node. Abort adding it back to queue." pod="default/busybox-5b5d89c9d6-sft2w" node="ha-105786-m04"
	
	
	==> kubelet <==
	Mar 14 18:33:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:33:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:33:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:33:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:34:02 ha-105786 kubelet[1439]: I0314 18:34:02.304411    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:34:02 ha-105786 kubelet[1439]: E0314 18:34:02.304851    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:34:15 ha-105786 kubelet[1439]: I0314 18:34:15.304205    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:34:15 ha-105786 kubelet[1439]: E0314 18:34:15.304958    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:34:27 ha-105786 kubelet[1439]: I0314 18:34:27.303870    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:34:27 ha-105786 kubelet[1439]: E0314 18:34:27.304999    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:34:38 ha-105786 kubelet[1439]: I0314 18:34:38.304239    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:34:38 ha-105786 kubelet[1439]: E0314 18:34:38.306650    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:34:50 ha-105786 kubelet[1439]: E0314 18:34:50.362980    1439 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:34:50 ha-105786 kubelet[1439]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:34:50 ha-105786 kubelet[1439]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:34:50 ha-105786 kubelet[1439]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:34:50 ha-105786 kubelet[1439]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:34:52 ha-105786 kubelet[1439]: I0314 18:34:52.303871    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:34:52 ha-105786 kubelet[1439]: E0314 18:34:52.304563    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:35:03 ha-105786 kubelet[1439]: I0314 18:35:03.304477    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:35:03 ha-105786 kubelet[1439]: E0314 18:35:03.305142    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:35:17 ha-105786 kubelet[1439]: I0314 18:35:17.304490    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:35:17 ha-105786 kubelet[1439]: E0314 18:35:17.305127    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	Mar 14 18:35:31 ha-105786 kubelet[1439]: I0314 18:35:31.304248    1439 scope.go:117] "RemoveContainer" containerID="ded817e115254a7901c4567cebdfa234b3ef342067bac95945d644d4b2994942"
	Mar 14 18:35:31 ha-105786 kubelet[1439]: E0314 18:35:31.304586    1439 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-vip\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=kube-vip pod=kube-vip-ha-105786_kube-system(8a8d15e80402cb826977826234ee3c6a)\"" pod="kube-system/kube-vip-ha-105786" podUID="8a8d15e80402cb826977826234ee3c6a"
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:35:37.389112  968373 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p ha-105786 -n ha-105786
helpers_test.go:261: (dbg) Run:  kubectl --context ha-105786 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMutliControlPlane/serial/StopCluster FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMutliControlPlane/serial/StopCluster (142.02s)
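Note on the stderr captured above: "bufio.Scanner: token too long" is the standard bufio.ErrTooLong failure Go's bufio.Scanner reports when a single line exceeds its default 64 KiB token limit, which the very long single-line config dumps written to lastStart.txt can exceed. The following is a minimal, hedged sketch (not the harness's actual logs.go code; the file path is copied from the error message above) showing how a scanner with an enlarged buffer reads such a file without that error:

package main

import (
	"bufio"
	"fmt"
	"log"
	"os"
)

func main() {
	// Path copied from the error message above; illustrative only.
	f, err := os.Open("/home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt")
	if err != nil {
		log.Fatal(err)
	}
	defer f.Close()

	scanner := bufio.NewScanner(f)
	// bufio.MaxScanTokenSize is 64 KiB by default; allow lines up to 10 MiB so a
	// single very long log line does not fail with "bufio.Scanner: token too long".
	scanner.Buffer(make([]byte, 0, 64*1024), 10*1024*1024)

	for scanner.Scan() {
		fmt.Println(scanner.Text())
	}
	if err := scanner.Err(); err != nil {
		log.Fatalf("reading lastStart.txt: %v", err)
	}
}

This only illustrates why the message appears and how a larger buffer avoids it; the harness's own log collection may handle the file differently.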

                                                
                                    
x
+
TestMultiNode/serial/RestartKeepsNodes (313.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartKeepsNodes
multinode_test.go:314: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-669543
multinode_test.go:321: (dbg) Run:  out/minikube-linux-amd64 stop -p multinode-669543
E0314 18:50:17.904811  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
multinode_test.go:321: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p multinode-669543: exit status 82 (2m2.721326833s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-669543-m03"  ...
	* Stopping node "multinode-669543-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:323: failed to run minikube stop. args "out/minikube-linux-amd64 node list -p multinode-669543" : exit status 82
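The non-zero result asserted on above ("exit status 82", raised alongside GUEST_STOP_TIMEOUT) is recovered from the child process's exit code. As a hedged sketch, assuming only the binary path and profile name shown in this section (this is not the test suite's actual Run helper), a Go caller can surface that code through os/exec:

package main

import (
	"errors"
	"fmt"
	"log"
	"os/exec"
)

func main() {
	// Illustrative only: invoke the same stop command the test ran above.
	cmd := exec.Command("out/minikube-linux-amd64", "stop", "-p", "multinode-669543")
	out, err := cmd.CombinedOutput()
	fmt.Printf("%s", out)

	var exitErr *exec.ExitError
	if errors.As(err, &exitErr) {
		// A stop that times out, as in the run above, surfaces here as a non-zero code (82 in this report).
		log.Fatalf("minikube stop failed: exit status %d", exitErr.ExitCode())
	} else if err != nil {
		log.Fatalf("could not run minikube stop: %v", err)
	}
}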
multinode_test.go:326: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-669543 --wait=true -v=8 --alsologtostderr
E0314 18:52:14.528501  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:52:14.853987  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
multinode_test.go:326: (dbg) Done: out/minikube-linux-amd64 start -p multinode-669543 --wait=true -v=8 --alsologtostderr: (3m7.997055203s)
multinode_test.go:331: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-669543
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-669543 -n multinode-669543
helpers_test.go:244: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/RestartKeepsNodes]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-669543 logs -n 25: (1.659265865s)
helpers_test.go:252: TestMultiNode/serial/RestartKeepsNodes logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543-m02.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543:/home/docker/cp-test_multinode-669543-m02_multinode-669543.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543 sudo cat                                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m02_multinode-669543.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03:/home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543-m03 sudo cat                                   | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp testdata/cp-test.txt                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543:/home/docker/cp-test_multinode-669543-m03_multinode-669543.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543 sudo cat                                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m03_multinode-669543.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02:/home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543-m02 sudo cat                                   | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-669543 node stop m03                                                          | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	| node    | multinode-669543 node start                                                             | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-669543                                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC |                     |
	| stop    | -p multinode-669543                                                                     | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC |                     |
	| start   | -p multinode-669543                                                                     | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:51 UTC | 14 Mar 24 18:55 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-669543                                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:55 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:51:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:51:56.337008  976444 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:51:56.337282  976444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:56.337290  976444 out.go:304] Setting ErrFile to fd 2...
	I0314 18:51:56.337295  976444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:56.337464  976444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:51:56.338072  976444 out.go:298] Setting JSON to false
	I0314 18:51:56.338950  976444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":95668,"bootTime":1710346648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:51:56.339014  976444 start.go:139] virtualization: kvm guest
	I0314 18:51:56.341262  976444 out.go:177] * [multinode-669543] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:51:56.342487  976444 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:51:56.342514  976444 notify.go:220] Checking for updates...
	I0314 18:51:56.343807  976444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:51:56.345213  976444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:51:56.346371  976444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:51:56.347557  976444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:51:56.348749  976444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:51:56.350442  976444 config.go:182] Loaded profile config "multinode-669543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:51:56.350549  976444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:51:56.351050  976444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:51:56.351122  976444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:51:56.366435  976444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0314 18:51:56.366900  976444 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:51:56.367630  976444 main.go:141] libmachine: Using API Version  1
	I0314 18:51:56.367667  976444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:51:56.368070  976444 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:51:56.368302  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.403528  976444 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:51:56.404769  976444 start.go:297] selected driver: kvm2
	I0314 18:51:56.404783  976444 start.go:901] validating driver "kvm2" against &{Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubern
etesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingres
s-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOpt
imizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:51:56.404906  976444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:51:56.405206  976444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:51:56.405266  976444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:51:56.420727  976444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:51:56.421337  976444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:51:56.421403  976444 cni.go:84] Creating CNI manager for ""
	I0314 18:51:56.421415  976444 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 18:51:56.421465  976444 start.go:340] cluster config:
	{Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHA
VIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kon
g:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetC
lientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:51:56.421605  976444 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:51:56.427094  976444 out.go:177] * Starting "multinode-669543" primary control-plane node in "multinode-669543" cluster
	I0314 18:51:56.433231  976444 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:51:56.433283  976444 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:51:56.433306  976444 cache.go:56] Caching tarball of preloaded images
	I0314 18:51:56.433417  976444 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:51:56.433432  976444 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:51:56.433551  976444 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/config.json ...
	I0314 18:51:56.433776  976444 start.go:360] acquireMachinesLock for multinode-669543: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:51:56.433836  976444 start.go:364] duration metric: took 29.397µs to acquireMachinesLock for "multinode-669543"
	I0314 18:51:56.433855  976444 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:51:56.433861  976444 fix.go:54] fixHost starting: 
	I0314 18:51:56.434162  976444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:51:56.434198  976444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:51:56.448838  976444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0314 18:51:56.449295  976444 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:51:56.449802  976444 main.go:141] libmachine: Using API Version  1
	I0314 18:51:56.449853  976444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:51:56.450179  976444 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:51:56.450403  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.450599  976444 main.go:141] libmachine: (multinode-669543) Calling .GetState
	I0314 18:51:56.452159  976444 fix.go:112] recreateIfNeeded on multinode-669543: state=Running err=<nil>
	W0314 18:51:56.452179  976444 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:51:56.454041  976444 out.go:177] * Updating the running kvm2 "multinode-669543" VM ...
	I0314 18:51:56.455190  976444 machine.go:94] provisionDockerMachine start ...
	I0314 18:51:56.455211  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.455438  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.458160  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.458763  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.458804  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.458903  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.459096  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.459272  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.459404  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.459581  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.459791  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.459803  976444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:51:56.566043  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-669543
	
	I0314 18:51:56.566077  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.566347  976444 buildroot.go:166] provisioning hostname "multinode-669543"
	I0314 18:51:56.566361  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.566555  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.569634  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.570048  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.570086  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.570156  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.570323  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.570479  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.570631  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.570807  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.571038  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.571057  976444 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-669543 && echo "multinode-669543" | sudo tee /etc/hostname
	I0314 18:51:56.696244  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-669543
	
	I0314 18:51:56.696273  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.699301  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.699733  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.699761  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.699971  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.700274  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.700458  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.700662  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.700856  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.701077  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.701095  976444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-669543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-669543/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-669543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:51:56.805651  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:51:56.805699  976444 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:51:56.805736  976444 buildroot.go:174] setting up certificates
	I0314 18:51:56.805750  976444 provision.go:84] configureAuth start
	I0314 18:51:56.805768  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.806086  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:51:56.808899  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.809311  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.809345  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.809555  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.812058  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.812486  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.812528  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.812693  976444 provision.go:143] copyHostCerts
	I0314 18:51:56.812729  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:51:56.812782  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:51:56.812806  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:51:56.812877  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:51:56.813040  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:51:56.813081  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:51:56.813091  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:51:56.813147  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:51:56.813211  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:51:56.813228  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:51:56.813236  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:51:56.813259  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:51:56.813327  976444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.multinode-669543 san=[127.0.0.1 192.168.39.68 localhost minikube multinode-669543]
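	(The server certificate above is generated with SANs for 127.0.0.1, 192.168.39.68, localhost, minikube and multinode-669543. A minimal, self-contained Go sketch of producing a certificate with those SANs via the standard library follows; it self-signs for brevity instead of chaining to the minikube CA, and every name in it is illustrative rather than minikube's actual provisioning code.)

    package main

    import (
    	"crypto/ecdsa"
    	"crypto/elliptic"
    	"crypto/rand"
    	"crypto/x509"
    	"crypto/x509/pkix"
    	"encoding/pem"
    	"math/big"
    	"net"
    	"os"
    	"time"
    )

    func main() {
    	// Key for the server certificate (the real flow also uses the CA key pair).
    	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
    	if err != nil {
    		panic(err)
    	}

    	tmpl := &x509.Certificate{
    		SerialNumber: big.NewInt(1),
    		Subject:      pkix.Name{Organization: []string{"jenkins.multinode-669543"}},
    		NotBefore:    time.Now(),
    		NotAfter:     time.Now().Add(26280 * time.Hour), // matches the CertExpiration seen in the config dump
    		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
    		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
    		// SANs copied from the log line above.
    		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.39.68")},
    		DNSNames:    []string{"localhost", "minikube", "multinode-669543"},
    	}

    	// Self-signed for brevity; minikube signs with its CA cert/key instead.
    	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
    	if err != nil {
    		panic(err)
    	}
    	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
    }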
	I0314 18:51:56.897036  976444 provision.go:177] copyRemoteCerts
	I0314 18:51:56.897094  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:51:56.897116  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.900099  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.900505  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.900533  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.900717  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.900933  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.901080  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.901277  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
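	(The ssh_runner above opens a key-based SSH connection to 192.168.39.68:22 as user docker and runs each "Run:" command in its own session. A rough Go sketch of that pattern with golang.org/x/crypto/ssh follows; the key path is taken from the log line, the host-key shortcut and the sample command are assumptions for illustration, not minikube's sshutil implementation.)

    package main

    import (
    	"fmt"
    	"os"

    	"golang.org/x/crypto/ssh"
    )

    func main() {
    	// Key path taken from the log line above; adjust for your environment.
    	keyBytes, err := os.ReadFile("/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa")
    	if err != nil {
    		panic(err)
    	}
    	signer, err := ssh.ParsePrivateKey(keyBytes)
    	if err != nil {
    		panic(err)
    	}

    	cfg := &ssh.ClientConfig{
    		User:            "docker",
    		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
    		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable only for a throwaway test VM
    	}

    	client, err := ssh.Dial("tcp", "192.168.39.68:22", cfg)
    	if err != nil {
    		panic(err)
    	}
    	defer client.Close()

    	// One session per command, mirroring how each "Run:" line above is executed.
    	session, err := client.NewSession()
    	if err != nil {
    		panic(err)
    	}
    	defer session.Close()

    	out, err := session.CombinedOutput("cat /etc/os-release")
    	if err != nil {
    		panic(err)
    	}
    	fmt.Print(string(out))
    }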
	I0314 18:51:56.988952  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:51:56.989036  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:51:57.019036  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:51:57.019094  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 18:51:57.046423  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:51:57.046476  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:51:57.076249  976444 provision.go:87] duration metric: took 270.480727ms to configureAuth
	I0314 18:51:57.076281  976444 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:51:57.076518  976444 config.go:182] Loaded profile config "multinode-669543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:51:57.076613  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:57.079411  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:57.079961  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:57.079985  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:57.080176  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:57.080395  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:57.080574  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:57.080785  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:57.081033  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:57.081205  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:57.081220  976444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:53:27.960112  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:53:27.960154  976444 machine.go:97] duration metric: took 1m31.504947384s to provisionDockerMachine
	I0314 18:53:27.960172  976444 start.go:293] postStartSetup for "multinode-669543" (driver="kvm2")
	I0314 18:53:27.960188  976444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:53:27.960249  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:27.960674  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:53:27.960708  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:27.964281  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:27.964955  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:27.965000  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:27.965213  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:27.965416  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:27.965598  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:27.965760  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.048828  976444 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:53:28.053457  976444 command_runner.go:130] > NAME=Buildroot
	I0314 18:53:28.053478  976444 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 18:53:28.053484  976444 command_runner.go:130] > ID=buildroot
	I0314 18:53:28.053488  976444 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 18:53:28.053493  976444 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 18:53:28.053531  976444 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:53:28.053550  976444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:53:28.053619  976444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:53:28.053732  976444 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:53:28.053746  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:53:28.053860  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:53:28.063797  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:53:28.090047  976444 start.go:296] duration metric: took 129.859698ms for postStartSetup
	I0314 18:53:28.090087  976444 fix.go:56] duration metric: took 1m31.656225159s for fixHost
	I0314 18:53:28.090113  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.092660  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.092999  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.093033  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.093164  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.093361  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.093544  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.093728  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.093897  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:53:28.094075  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:53:28.094090  976444 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:53:28.197491  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710442408.169414138
	
	I0314 18:53:28.197528  976444 fix.go:216] guest clock: 1710442408.169414138
	I0314 18:53:28.197539  976444 fix.go:229] Guest: 2024-03-14 18:53:28.169414138 +0000 UTC Remote: 2024-03-14 18:53:28.090091744 +0000 UTC m=+91.804740506 (delta=79.322394ms)
	I0314 18:53:28.197562  976444 fix.go:200] guest clock delta is within tolerance: 79.322394ms
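	(The fix step above compares the guest clock against the host clock and accepts the drift because the ~79ms delta is inside the tolerance. A tiny Go sketch of that comparison, with the timestamps copied from the log and the tolerance value assumed for illustration:)

    package main

    import (
    	"fmt"
    	"time"
    )

    // withinTolerance reports whether the guest clock is close enough to the host clock.
    func withinTolerance(host, guest time.Time, tolerance time.Duration) bool {
    	delta := guest.Sub(host)
    	if delta < 0 {
    		delta = -delta
    	}
    	return delta <= tolerance
    }

    func main() {
    	host := time.Date(2024, 3, 14, 18, 53, 28, 90091744, time.UTC)   // Remote (host) timestamp from the log
    	guest := time.Date(2024, 3, 14, 18, 53, 28, 169414138, time.UTC) // Guest timestamp from the log
    	fmt.Println(withinTolerance(host, guest, time.Second))           // 1s tolerance is an assumption
    }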
	I0314 18:53:28.197567  976444 start.go:83] releasing machines lock for "multinode-669543", held for 1m31.76371898s
	I0314 18:53:28.197588  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.197868  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:53:28.200520  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.200957  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.200987  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.201102  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.201766  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.201943  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.202053  976444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:53:28.202112  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.202188  976444 ssh_runner.go:195] Run: cat /version.json
	I0314 18:53:28.202213  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.204807  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205144  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.205193  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205217  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205434  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.205608  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.205722  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.205759  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205779  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.205872  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.205940  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.205992  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.206126  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.206247  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.281887  976444 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 18:53:28.282055  976444 ssh_runner.go:195] Run: systemctl --version
	I0314 18:53:28.309346  976444 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 18:53:28.309397  976444 command_runner.go:130] > systemd 252 (252)
	I0314 18:53:28.309435  976444 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 18:53:28.309515  976444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:53:28.475164  976444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:53:28.483185  976444 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 18:53:28.483638  976444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:53:28.483707  976444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:53:28.493953  976444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:53:28.493973  976444 start.go:494] detecting cgroup driver to use...
	I0314 18:53:28.494019  976444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:53:28.511167  976444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:53:28.525698  976444 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:53:28.525735  976444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:53:28.539532  976444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:53:28.553912  976444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:53:28.707562  976444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:53:28.865944  976444 docker.go:233] disabling docker service ...
	I0314 18:53:28.866020  976444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:53:28.886291  976444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:53:28.901462  976444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:53:29.049987  976444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:53:29.194314  976444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:53:29.211025  976444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:53:29.232616  976444 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0314 18:53:29.232655  976444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:53:29.232707  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.246809  976444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:53:29.246880  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.259639  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.271880  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
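	(The three sed commands above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, set cgroup_manager to "cgroupfs", and re-add conmon_cgroup = "pod". A hedged Go equivalent of just the cgroup_manager rewrite follows; the file path and the single-line substitution are taken from the log, everything else is illustrative.)

    package main

    import (
    	"os"
    	"regexp"
    )

    func main() {
    	const conf = "/etc/crio/crio.conf.d/02-crio.conf" // path taken from the log above

    	data, err := os.ReadFile(conf)
    	if err != nil {
    		panic(err)
    	}

    	// Same idea as: sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|'
    	re := regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`)
    	updated := re.ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))

    	if err := os.WriteFile(conf, updated, 0o644); err != nil {
    		panic(err)
    	}
    }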
	I0314 18:53:29.283695  976444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:53:29.295810  976444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:53:29.306418  976444 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 18:53:29.306633  976444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:53:29.317772  976444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:53:29.460396  976444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:53:36.354342  976444 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.893907209s)
	I0314 18:53:36.354379  976444 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:53:36.354440  976444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:53:36.360026  976444 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0314 18:53:36.360054  976444 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 18:53:36.360063  976444 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0314 18:53:36.360082  976444 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 18:53:36.360094  976444 command_runner.go:130] > Access: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360105  976444 command_runner.go:130] > Modify: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360115  976444 command_runner.go:130] > Change: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360124  976444 command_runner.go:130] >  Birth: -
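	(After restarting CRI-O, the runner waits up to 60s for /var/run/crio/crio.sock to exist before continuing, as announced above. A simple Go poll loop expressing that wait; the polling interval is chosen arbitrarily here and is not minikube's value.)

    package main

    import (
    	"fmt"
    	"os"
    	"time"
    )

    // waitForSocket polls until path exists or the timeout elapses.
    func waitForSocket(path string, timeout, interval time.Duration) error {
    	deadline := time.Now().Add(timeout)
    	for {
    		if _, err := os.Stat(path); err == nil {
    			return nil
    		}
    		if time.Now().After(deadline) {
    			return fmt.Errorf("timed out waiting for %s", path)
    		}
    		time.Sleep(interval)
    	}
    }

    func main() {
    	if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second, 500*time.Millisecond); err != nil {
    		panic(err)
    	}
    	fmt.Println("socket is ready")
    }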
	I0314 18:53:36.360233  976444 start.go:562] Will wait 60s for crictl version
	I0314 18:53:36.360286  976444 ssh_runner.go:195] Run: which crictl
	I0314 18:53:36.364946  976444 command_runner.go:130] > /usr/bin/crictl
	I0314 18:53:36.365086  976444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:53:36.407205  976444 command_runner.go:130] > Version:  0.1.0
	I0314 18:53:36.407229  976444 command_runner.go:130] > RuntimeName:  cri-o
	I0314 18:53:36.407234  976444 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0314 18:53:36.407239  976444 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 18:53:36.407506  976444 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:53:36.407614  976444 ssh_runner.go:195] Run: crio --version
	I0314 18:53:36.439747  976444 command_runner.go:130] > crio version 1.29.1
	I0314 18:53:36.439768  976444 command_runner.go:130] > Version:        1.29.1
	I0314 18:53:36.439776  976444 command_runner.go:130] > GitCommit:      unknown
	I0314 18:53:36.439782  976444 command_runner.go:130] > GitCommitDate:  unknown
	I0314 18:53:36.439788  976444 command_runner.go:130] > GitTreeState:   clean
	I0314 18:53:36.439798  976444 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 18:53:36.439805  976444 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 18:53:36.439810  976444 command_runner.go:130] > Compiler:       gc
	I0314 18:53:36.439815  976444 command_runner.go:130] > Platform:       linux/amd64
	I0314 18:53:36.439819  976444 command_runner.go:130] > Linkmode:       dynamic
	I0314 18:53:36.439823  976444 command_runner.go:130] > BuildTags:      
	I0314 18:53:36.439828  976444 command_runner.go:130] >   containers_image_ostree_stub
	I0314 18:53:36.439833  976444 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 18:53:36.439836  976444 command_runner.go:130] >   btrfs_noversion
	I0314 18:53:36.439841  976444 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 18:53:36.439845  976444 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 18:53:36.439852  976444 command_runner.go:130] >   seccomp
	I0314 18:53:36.439857  976444 command_runner.go:130] > LDFlags:          unknown
	I0314 18:53:36.439860  976444 command_runner.go:130] > SeccompEnabled:   true
	I0314 18:53:36.439867  976444 command_runner.go:130] > AppArmorEnabled:  false
	I0314 18:53:36.441172  976444 ssh_runner.go:195] Run: crio --version
	I0314 18:53:36.473536  976444 command_runner.go:130] > crio version 1.29.1
	I0314 18:53:36.473558  976444 command_runner.go:130] > Version:        1.29.1
	I0314 18:53:36.473563  976444 command_runner.go:130] > GitCommit:      unknown
	I0314 18:53:36.473567  976444 command_runner.go:130] > GitCommitDate:  unknown
	I0314 18:53:36.473571  976444 command_runner.go:130] > GitTreeState:   clean
	I0314 18:53:36.473577  976444 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 18:53:36.473581  976444 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 18:53:36.473585  976444 command_runner.go:130] > Compiler:       gc
	I0314 18:53:36.473604  976444 command_runner.go:130] > Platform:       linux/amd64
	I0314 18:53:36.473609  976444 command_runner.go:130] > Linkmode:       dynamic
	I0314 18:53:36.473617  976444 command_runner.go:130] > BuildTags:      
	I0314 18:53:36.473621  976444 command_runner.go:130] >   containers_image_ostree_stub
	I0314 18:53:36.473625  976444 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 18:53:36.473629  976444 command_runner.go:130] >   btrfs_noversion
	I0314 18:53:36.473633  976444 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 18:53:36.473637  976444 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 18:53:36.473641  976444 command_runner.go:130] >   seccomp
	I0314 18:53:36.473645  976444 command_runner.go:130] > LDFlags:          unknown
	I0314 18:53:36.473649  976444 command_runner.go:130] > SeccompEnabled:   true
	I0314 18:53:36.473653  976444 command_runner.go:130] > AppArmorEnabled:  false
	I0314 18:53:36.481572  976444 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:53:36.483061  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:53:36.485935  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:36.486331  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:36.486366  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:36.486526  976444 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:53:36.491354  976444 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0314 18:53:36.491512  976444 kubeadm.go:877] updating cluster {Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:53:36.491670  976444 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:53:36.491782  976444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:53:36.547502  976444 command_runner.go:130] > {
	I0314 18:53:36.547528  976444 command_runner.go:130] >   "images": [
	I0314 18:53:36.547536  976444 command_runner.go:130] >     {
	I0314 18:53:36.547548  976444 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 18:53:36.547554  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547562  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 18:53:36.547568  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547573  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547585  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 18:53:36.547598  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 18:53:36.547607  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547618  976444 command_runner.go:130] >       "size": "65258016",
	I0314 18:53:36.547625  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547633  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547642  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547650  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547655  976444 command_runner.go:130] >     },
	I0314 18:53:36.547662  976444 command_runner.go:130] >     {
	I0314 18:53:36.547672  976444 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 18:53:36.547685  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547694  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 18:53:36.547700  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547707  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547720  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 18:53:36.547733  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 18:53:36.547749  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547760  976444 command_runner.go:130] >       "size": "65291810",
	I0314 18:53:36.547767  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547783  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547793  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547800  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547807  976444 command_runner.go:130] >     },
	I0314 18:53:36.547812  976444 command_runner.go:130] >     {
	I0314 18:53:36.547823  976444 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 18:53:36.547834  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547843  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 18:53:36.547849  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547857  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547872  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 18:53:36.547888  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 18:53:36.547896  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547903  976444 command_runner.go:130] >       "size": "1363676",
	I0314 18:53:36.547912  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547919  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547929  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547938  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547947  976444 command_runner.go:130] >     },
	I0314 18:53:36.547953  976444 command_runner.go:130] >     {
	I0314 18:53:36.547967  976444 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 18:53:36.547977  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547986  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 18:53:36.547995  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548002  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548019  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 18:53:36.548043  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 18:53:36.548053  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548060  976444 command_runner.go:130] >       "size": "31470524",
	I0314 18:53:36.548066  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548073  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548083  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548092  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548112  976444 command_runner.go:130] >     },
	I0314 18:53:36.548121  976444 command_runner.go:130] >     {
	I0314 18:53:36.548133  976444 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 18:53:36.548143  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548152  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 18:53:36.548161  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548168  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548181  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 18:53:36.548197  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 18:53:36.548215  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548223  976444 command_runner.go:130] >       "size": "53621675",
	I0314 18:53:36.548236  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548244  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548253  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548260  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548268  976444 command_runner.go:130] >     },
	I0314 18:53:36.548275  976444 command_runner.go:130] >     {
	I0314 18:53:36.548288  976444 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 18:53:36.548297  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548306  976444 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 18:53:36.548314  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548323  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548338  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 18:53:36.548353  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 18:53:36.548361  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548369  976444 command_runner.go:130] >       "size": "295456551",
	I0314 18:53:36.548378  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548386  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548394  976444 command_runner.go:130] >       },
	I0314 18:53:36.548402  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548412  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548419  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548428  976444 command_runner.go:130] >     },
	I0314 18:53:36.548435  976444 command_runner.go:130] >     {
	I0314 18:53:36.548449  976444 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 18:53:36.548459  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548484  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 18:53:36.548494  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548500  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548514  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 18:53:36.548529  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 18:53:36.548539  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548546  976444 command_runner.go:130] >       "size": "127226832",
	I0314 18:53:36.548556  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548565  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548572  976444 command_runner.go:130] >       },
	I0314 18:53:36.548581  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548587  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548595  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548600  976444 command_runner.go:130] >     },
	I0314 18:53:36.548607  976444 command_runner.go:130] >     {
	I0314 18:53:36.548620  976444 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 18:53:36.548631  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548642  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 18:53:36.548649  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548658  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548692  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 18:53:36.548709  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 18:53:36.548716  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548723  976444 command_runner.go:130] >       "size": "123261750",
	I0314 18:53:36.548733  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548740  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548749  976444 command_runner.go:130] >       },
	I0314 18:53:36.548756  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548766  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548773  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548783  976444 command_runner.go:130] >     },
	I0314 18:53:36.548790  976444 command_runner.go:130] >     {
	I0314 18:53:36.548802  976444 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 18:53:36.548812  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548818  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 18:53:36.548823  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548834  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548844  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 18:53:36.548854  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 18:53:36.548858  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548865  976444 command_runner.go:130] >       "size": "74749335",
	I0314 18:53:36.548872  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548877  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548883  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548889  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548895  976444 command_runner.go:130] >     },
	I0314 18:53:36.548900  976444 command_runner.go:130] >     {
	I0314 18:53:36.548909  976444 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 18:53:36.548915  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548922  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 18:53:36.548928  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548934  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548944  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 18:53:36.548957  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 18:53:36.548966  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548974  976444 command_runner.go:130] >       "size": "61551410",
	I0314 18:53:36.548983  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548991  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548999  976444 command_runner.go:130] >       },
	I0314 18:53:36.549006  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.549016  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.549023  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.549032  976444 command_runner.go:130] >     },
	I0314 18:53:36.549038  976444 command_runner.go:130] >     {
	I0314 18:53:36.549051  976444 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 18:53:36.549061  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.549072  976444 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 18:53:36.549082  976444 command_runner.go:130] >       ],
	I0314 18:53:36.549089  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.549104  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 18:53:36.549118  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 18:53:36.549127  976444 command_runner.go:130] >       ],
	I0314 18:53:36.549142  976444 command_runner.go:130] >       "size": "750414",
	I0314 18:53:36.549152  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.549159  976444 command_runner.go:130] >         "value": "65535"
	I0314 18:53:36.549167  976444 command_runner.go:130] >       },
	I0314 18:53:36.549175  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.549184  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.549191  976444 command_runner.go:130] >       "pinned": true
	I0314 18:53:36.549197  976444 command_runner.go:130] >     }
	I0314 18:53:36.549203  976444 command_runner.go:130] >   ]
	I0314 18:53:36.549210  976444 command_runner.go:130] > }
	I0314 18:53:36.549411  976444 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:53:36.549424  976444 crio.go:415] Images already preloaded, skipping extraction
	I0314 18:53:36.549476  976444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:53:36.585686  976444 command_runner.go:130] > {
	I0314 18:53:36.585709  976444 command_runner.go:130] >   "images": [
	I0314 18:53:36.585713  976444 command_runner.go:130] >     {
	I0314 18:53:36.585721  976444 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 18:53:36.585726  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585732  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 18:53:36.585735  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585746  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585759  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 18:53:36.585776  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 18:53:36.585781  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585785  976444 command_runner.go:130] >       "size": "65258016",
	I0314 18:53:36.585791  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.585795  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.585804  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.585811  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.585814  976444 command_runner.go:130] >     },
	I0314 18:53:36.585818  976444 command_runner.go:130] >     {
	I0314 18:53:36.585830  976444 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 18:53:36.585838  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585846  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 18:53:36.585855  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585861  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585873  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 18:53:36.585886  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 18:53:36.585895  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585900  976444 command_runner.go:130] >       "size": "65291810",
	I0314 18:53:36.585910  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.585922  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.585930  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.585938  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.585944  976444 command_runner.go:130] >     },
	I0314 18:53:36.585948  976444 command_runner.go:130] >     {
	I0314 18:53:36.585956  976444 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 18:53:36.585961  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585967  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 18:53:36.585973  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585976  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585983  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 18:53:36.585991  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 18:53:36.585994  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585998  976444 command_runner.go:130] >       "size": "1363676",
	I0314 18:53:36.586004  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586008  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586018  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586024  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586027  976444 command_runner.go:130] >     },
	I0314 18:53:36.586031  976444 command_runner.go:130] >     {
	I0314 18:53:36.586037  976444 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 18:53:36.586041  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586047  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 18:53:36.586053  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586057  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586064  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 18:53:36.586089  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 18:53:36.586095  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586099  976444 command_runner.go:130] >       "size": "31470524",
	I0314 18:53:36.586103  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586107  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586111  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586115  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586118  976444 command_runner.go:130] >     },
	I0314 18:53:36.586122  976444 command_runner.go:130] >     {
	I0314 18:53:36.586128  976444 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 18:53:36.586135  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586140  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 18:53:36.586144  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586149  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586158  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 18:53:36.586166  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 18:53:36.586171  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586175  976444 command_runner.go:130] >       "size": "53621675",
	I0314 18:53:36.586179  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586182  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586186  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586190  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586193  976444 command_runner.go:130] >     },
	I0314 18:53:36.586197  976444 command_runner.go:130] >     {
	I0314 18:53:36.586203  976444 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 18:53:36.586209  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586214  976444 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 18:53:36.586220  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586224  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586230  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 18:53:36.586239  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 18:53:36.586243  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586247  976444 command_runner.go:130] >       "size": "295456551",
	I0314 18:53:36.586250  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586254  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586257  976444 command_runner.go:130] >       },
	I0314 18:53:36.586266  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586273  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586276  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586279  976444 command_runner.go:130] >     },
	I0314 18:53:36.586282  976444 command_runner.go:130] >     {
	I0314 18:53:36.586288  976444 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 18:53:36.586293  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586298  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 18:53:36.586304  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586308  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586315  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 18:53:36.586324  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 18:53:36.586327  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586331  976444 command_runner.go:130] >       "size": "127226832",
	I0314 18:53:36.586336  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586340  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586345  976444 command_runner.go:130] >       },
	I0314 18:53:36.586349  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586355  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586359  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586365  976444 command_runner.go:130] >     },
	I0314 18:53:36.586369  976444 command_runner.go:130] >     {
	I0314 18:53:36.586377  976444 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 18:53:36.586383  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586388  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 18:53:36.586394  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586398  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586420  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 18:53:36.586433  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 18:53:36.586441  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586448  976444 command_runner.go:130] >       "size": "123261750",
	I0314 18:53:36.586452  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586458  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586462  976444 command_runner.go:130] >       },
	I0314 18:53:36.586468  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586472  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586482  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586488  976444 command_runner.go:130] >     },
	I0314 18:53:36.586492  976444 command_runner.go:130] >     {
	I0314 18:53:36.586497  976444 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 18:53:36.586503  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586508  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 18:53:36.586514  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586518  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586528  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 18:53:36.586537  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 18:53:36.586542  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586547  976444 command_runner.go:130] >       "size": "74749335",
	I0314 18:53:36.586552  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586557  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586563  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586566  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586572  976444 command_runner.go:130] >     },
	I0314 18:53:36.586575  976444 command_runner.go:130] >     {
	I0314 18:53:36.586581  976444 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 18:53:36.586588  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586592  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 18:53:36.586598  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586602  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586612  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 18:53:36.586622  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 18:53:36.586627  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586631  976444 command_runner.go:130] >       "size": "61551410",
	I0314 18:53:36.586638  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586641  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586647  976444 command_runner.go:130] >       },
	I0314 18:53:36.586651  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586657  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586661  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586665  976444 command_runner.go:130] >     },
	I0314 18:53:36.586671  976444 command_runner.go:130] >     {
	I0314 18:53:36.586676  976444 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 18:53:36.586688  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586706  976444 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 18:53:36.586711  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586715  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586724  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 18:53:36.586733  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 18:53:36.586741  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586747  976444 command_runner.go:130] >       "size": "750414",
	I0314 18:53:36.586750  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586754  976444 command_runner.go:130] >         "value": "65535"
	I0314 18:53:36.586760  976444 command_runner.go:130] >       },
	I0314 18:53:36.586764  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586770  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586774  976444 command_runner.go:130] >       "pinned": true
	I0314 18:53:36.586780  976444 command_runner.go:130] >     }
	I0314 18:53:36.586783  976444 command_runner.go:130] >   ]
	I0314 18:53:36.586789  976444 command_runner.go:130] > }
	I0314 18:53:36.586951  976444 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:53:36.586964  976444 cache_images.go:84] Images are preloaded, skipping loading
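The "crictl images --output json" payload logged above is what this preload check inspects: each image entry carries an id, repoTags, repoDigests, a size (reported as a string), an optional uid, and a pinned flag. The short Go sketch below only illustrates how such a payload can be decoded; it is not minikube's implementation, and the struct layout and command invocation are assumptions that simply mirror the fields and the command visible in the log.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// imageList mirrors the JSON shape shown in the log: a top-level "images"
// array whose entries expose id, repoTags, repoDigests, size and pinned.
type imageList struct {
	Images []struct {
		ID          string   `json:"id"`
		RepoTags    []string `json:"repoTags"`
		RepoDigests []string `json:"repoDigests"`
		Size        string   `json:"size"` // crictl reports the size as a string, e.g. "74749335"
		Pinned      bool     `json:"pinned"`
	} `json:"images"`
}

func main() {
	// Same command the runner logs: sudo crictl images --output json
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	for _, img := range list.Images {
		fmt.Println(img.RepoTags, img.Size)
	}
}

Listing the repo tags this way is enough to compare what the node already has against the images expected for Kubernetes v1.28.4 before deciding that extraction can be skipped.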
	I0314 18:53:36.586972  976444 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0314 18:53:36.587066  976444 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-669543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 18:53:36.587125  976444 ssh_runner.go:195] Run: crio config
	I0314 18:53:36.630942  976444 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0314 18:53:36.630980  976444 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0314 18:53:36.630991  976444 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0314 18:53:36.630996  976444 command_runner.go:130] > #
	I0314 18:53:36.631006  976444 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0314 18:53:36.631016  976444 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0314 18:53:36.631025  976444 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0314 18:53:36.631035  976444 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0314 18:53:36.631045  976444 command_runner.go:130] > # reload'.
	I0314 18:53:36.631054  976444 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0314 18:53:36.631067  976444 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0314 18:53:36.631076  976444 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0314 18:53:36.631089  976444 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0314 18:53:36.631094  976444 command_runner.go:130] > [crio]
	I0314 18:53:36.631103  976444 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0314 18:53:36.631114  976444 command_runner.go:130] > # containers images, in this directory.
	I0314 18:53:36.631227  976444 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0314 18:53:36.631270  976444 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0314 18:53:36.631422  976444 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0314 18:53:36.631444  976444 command_runner.go:130] > # Path to the "imagestore". If set, CRI-O stores all of its images in this directory instead of under Root.
	I0314 18:53:36.631717  976444 command_runner.go:130] > # imagestore = ""
	I0314 18:53:36.631730  976444 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0314 18:53:36.631739  976444 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0314 18:53:36.632204  976444 command_runner.go:130] > storage_driver = "overlay"
	I0314 18:53:36.632243  976444 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0314 18:53:36.632253  976444 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0314 18:53:36.632261  976444 command_runner.go:130] > storage_option = [
	I0314 18:53:36.632407  976444 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0314 18:53:36.632524  976444 command_runner.go:130] > ]
	I0314 18:53:36.632542  976444 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0314 18:53:36.632552  976444 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0314 18:53:36.632945  976444 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0314 18:53:36.632959  976444 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0314 18:53:36.632967  976444 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0314 18:53:36.632975  976444 command_runner.go:130] > # always happen on a node reboot
	I0314 18:53:36.633391  976444 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0314 18:53:36.633413  976444 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0314 18:53:36.633424  976444 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0314 18:53:36.633436  976444 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0314 18:53:36.633447  976444 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0314 18:53:36.633463  976444 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0314 18:53:36.633480  976444 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0314 18:53:36.633796  976444 command_runner.go:130] > # internal_wipe = true
	I0314 18:53:36.633813  976444 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0314 18:53:36.633821  976444 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0314 18:53:36.634280  976444 command_runner.go:130] > # internal_repair = false
	I0314 18:53:36.634294  976444 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0314 18:53:36.634304  976444 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0314 18:53:36.634313  976444 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0314 18:53:36.634547  976444 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0314 18:53:36.634565  976444 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0314 18:53:36.634571  976444 command_runner.go:130] > [crio.api]
	I0314 18:53:36.634581  976444 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0314 18:53:36.634878  976444 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0314 18:53:36.634889  976444 command_runner.go:130] > # IP address on which the stream server will listen.
	I0314 18:53:36.635265  976444 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0314 18:53:36.635291  976444 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0314 18:53:36.635301  976444 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0314 18:53:36.635507  976444 command_runner.go:130] > # stream_port = "0"
	I0314 18:53:36.635520  976444 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0314 18:53:36.635885  976444 command_runner.go:130] > # stream_enable_tls = false
	I0314 18:53:36.635909  976444 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0314 18:53:36.636433  976444 command_runner.go:130] > # stream_idle_timeout = ""
	I0314 18:53:36.636454  976444 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0314 18:53:36.636464  976444 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0314 18:53:36.636471  976444 command_runner.go:130] > # minutes.
	I0314 18:53:36.636646  976444 command_runner.go:130] > # stream_tls_cert = ""
	I0314 18:53:36.636667  976444 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0314 18:53:36.636677  976444 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0314 18:53:36.636911  976444 command_runner.go:130] > # stream_tls_key = ""
	I0314 18:53:36.636928  976444 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0314 18:53:36.636938  976444 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0314 18:53:36.636972  976444 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0314 18:53:36.637279  976444 command_runner.go:130] > # stream_tls_ca = ""
	I0314 18:53:36.637297  976444 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 18:53:36.637435  976444 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0314 18:53:36.637456  976444 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 18:53:36.637611  976444 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0314 18:53:36.637627  976444 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0314 18:53:36.637636  976444 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0314 18:53:36.637643  976444 command_runner.go:130] > [crio.runtime]
	I0314 18:53:36.637663  976444 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0314 18:53:36.637677  976444 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0314 18:53:36.637688  976444 command_runner.go:130] > # "nofile=1024:2048"
	I0314 18:53:36.637698  976444 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0314 18:53:36.637763  976444 command_runner.go:130] > # default_ulimits = [
	I0314 18:53:36.637947  976444 command_runner.go:130] > # ]
	I0314 18:53:36.637963  976444 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0314 18:53:36.638301  976444 command_runner.go:130] > # no_pivot = false
	I0314 18:53:36.638322  976444 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0314 18:53:36.638332  976444 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0314 18:53:36.638345  976444 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0314 18:53:36.638354  976444 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0314 18:53:36.638369  976444 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0314 18:53:36.638384  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 18:53:36.638402  976444 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0314 18:53:36.638412  976444 command_runner.go:130] > # Cgroup setting for conmon
	I0314 18:53:36.638426  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0314 18:53:36.638445  976444 command_runner.go:130] > conmon_cgroup = "pod"
	I0314 18:53:36.638458  976444 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0314 18:53:36.638468  976444 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0314 18:53:36.638478  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 18:53:36.638488  976444 command_runner.go:130] > conmon_env = [
	I0314 18:53:36.638501  976444 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 18:53:36.638510  976444 command_runner.go:130] > ]
	I0314 18:53:36.638518  976444 command_runner.go:130] > # Additional environment variables to set for all the
	I0314 18:53:36.638530  976444 command_runner.go:130] > # containers. These are overridden if set in the
	I0314 18:53:36.638539  976444 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0314 18:53:36.638546  976444 command_runner.go:130] > # default_env = [
	I0314 18:53:36.638550  976444 command_runner.go:130] > # ]
	I0314 18:53:36.638560  976444 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0314 18:53:36.638579  976444 command_runner.go:130] > # This option is deprecated, and will be interpreted from whether SELinux is enabled on the host in the future.
	I0314 18:53:36.638588  976444 command_runner.go:130] > # selinux = false
	I0314 18:53:36.638598  976444 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0314 18:53:36.638607  976444 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0314 18:53:36.638620  976444 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0314 18:53:36.638625  976444 command_runner.go:130] > # seccomp_profile = ""
	I0314 18:53:36.638636  976444 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0314 18:53:36.638645  976444 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0314 18:53:36.638657  976444 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0314 18:53:36.638666  976444 command_runner.go:130] > # which might increase security.
	I0314 18:53:36.638673  976444 command_runner.go:130] > # This option is currently deprecated,
	I0314 18:53:36.638686  976444 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0314 18:53:36.638694  976444 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0314 18:53:36.638717  976444 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0314 18:53:36.638730  976444 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0314 18:53:36.638743  976444 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0314 18:53:36.638753  976444 command_runner.go:130] > # the profile is set to "unconfined", then this equals disabling AppArmor.
	I0314 18:53:36.638764  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.638774  976444 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0314 18:53:36.638789  976444 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0314 18:53:36.638799  976444 command_runner.go:130] > # the cgroup blockio controller.
	I0314 18:53:36.638806  976444 command_runner.go:130] > # blockio_config_file = ""
	I0314 18:53:36.638820  976444 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0314 18:53:36.638839  976444 command_runner.go:130] > # blockio parameters.
	I0314 18:53:36.638849  976444 command_runner.go:130] > # blockio_reload = false
	I0314 18:53:36.638859  976444 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0314 18:53:36.638868  976444 command_runner.go:130] > # irqbalance daemon.
	I0314 18:53:36.638876  976444 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0314 18:53:36.638888  976444 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0314 18:53:36.638898  976444 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0314 18:53:36.638911  976444 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0314 18:53:36.638924  976444 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0314 18:53:36.638934  976444 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0314 18:53:36.638942  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.638948  976444 command_runner.go:130] > # rdt_config_file = ""
	I0314 18:53:36.638956  976444 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0314 18:53:36.638964  976444 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0314 18:53:36.639002  976444 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0314 18:53:36.639011  976444 command_runner.go:130] > # separate_pull_cgroup = ""
	I0314 18:53:36.639021  976444 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0314 18:53:36.639034  976444 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0314 18:53:36.639040  976444 command_runner.go:130] > # will be added.
	I0314 18:53:36.639050  976444 command_runner.go:130] > # default_capabilities = [
	I0314 18:53:36.639055  976444 command_runner.go:130] > # 	"CHOWN",
	I0314 18:53:36.639064  976444 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0314 18:53:36.639070  976444 command_runner.go:130] > # 	"FSETID",
	I0314 18:53:36.639079  976444 command_runner.go:130] > # 	"FOWNER",
	I0314 18:53:36.639085  976444 command_runner.go:130] > # 	"SETGID",
	I0314 18:53:36.639094  976444 command_runner.go:130] > # 	"SETUID",
	I0314 18:53:36.639100  976444 command_runner.go:130] > # 	"SETPCAP",
	I0314 18:53:36.639105  976444 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0314 18:53:36.639110  976444 command_runner.go:130] > # 	"KILL",
	I0314 18:53:36.639115  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639125  976444 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0314 18:53:36.639143  976444 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0314 18:53:36.639151  976444 command_runner.go:130] > # add_inheritable_capabilities = false
	I0314 18:53:36.639159  976444 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0314 18:53:36.639172  976444 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 18:53:36.639179  976444 command_runner.go:130] > # default_sysctls = [
	I0314 18:53:36.639194  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639203  976444 command_runner.go:130] > # List of devices on the host that a
	I0314 18:53:36.639212  976444 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0314 18:53:36.639221  976444 command_runner.go:130] > # allowed_devices = [
	I0314 18:53:36.639227  976444 command_runner.go:130] > # 	"/dev/fuse",
	I0314 18:53:36.639233  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639240  976444 command_runner.go:130] > # List of additional devices, specified as
	I0314 18:53:36.639254  976444 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0314 18:53:36.639266  976444 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0314 18:53:36.639274  976444 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 18:53:36.639284  976444 command_runner.go:130] > # additional_devices = [
	I0314 18:53:36.639289  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639299  976444 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0314 18:53:36.639305  976444 command_runner.go:130] > # cdi_spec_dirs = [
	I0314 18:53:36.639314  976444 command_runner.go:130] > # 	"/etc/cdi",
	I0314 18:53:36.639320  976444 command_runner.go:130] > # 	"/var/run/cdi",
	I0314 18:53:36.639326  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639336  976444 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0314 18:53:36.639348  976444 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0314 18:53:36.639354  976444 command_runner.go:130] > # Defaults to false.
	I0314 18:53:36.639365  976444 command_runner.go:130] > # device_ownership_from_security_context = false
	I0314 18:53:36.639374  976444 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0314 18:53:36.639386  976444 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0314 18:53:36.639392  976444 command_runner.go:130] > # hooks_dir = [
	I0314 18:53:36.639402  976444 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0314 18:53:36.639408  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639416  976444 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0314 18:53:36.639429  976444 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0314 18:53:36.639437  976444 command_runner.go:130] > # its default mounts from the following two files:
	I0314 18:53:36.639442  976444 command_runner.go:130] > #
	I0314 18:53:36.639451  976444 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0314 18:53:36.639465  976444 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0314 18:53:36.639477  976444 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0314 18:53:36.639483  976444 command_runner.go:130] > #
	I0314 18:53:36.639490  976444 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0314 18:53:36.639504  976444 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0314 18:53:36.639523  976444 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0314 18:53:36.639538  976444 command_runner.go:130] > #      only add mounts it finds in this file.
	I0314 18:53:36.639542  976444 command_runner.go:130] > #
	I0314 18:53:36.639549  976444 command_runner.go:130] > # default_mounts_file = ""
	I0314 18:53:36.639558  976444 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0314 18:53:36.639568  976444 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0314 18:53:36.639577  976444 command_runner.go:130] > pids_limit = 1024
	I0314 18:53:36.639587  976444 command_runner.go:130] > # Maximum size allowed for the container log file. Negative numbers indicate
	I0314 18:53:36.639597  976444 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0314 18:53:36.639605  976444 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0314 18:53:36.639617  976444 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0314 18:53:36.639622  976444 command_runner.go:130] > # log_size_max = -1
	I0314 18:53:36.639632  976444 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0314 18:53:36.639641  976444 command_runner.go:130] > # log_to_journald = false
	I0314 18:53:36.639650  976444 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0314 18:53:36.639661  976444 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0314 18:53:36.639668  976444 command_runner.go:130] > # Path to directory for container attach sockets.
	I0314 18:53:36.639679  976444 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0314 18:53:36.639687  976444 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0314 18:53:36.639701  976444 command_runner.go:130] > # bind_mount_prefix = ""
	I0314 18:53:36.639712  976444 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0314 18:53:36.639718  976444 command_runner.go:130] > # read_only = false
	I0314 18:53:36.639730  976444 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0314 18:53:36.639739  976444 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0314 18:53:36.639749  976444 command_runner.go:130] > # live configuration reload.
	I0314 18:53:36.639755  976444 command_runner.go:130] > # log_level = "info"
	I0314 18:53:36.639766  976444 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0314 18:53:36.639774  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.639780  976444 command_runner.go:130] > # log_filter = ""
	I0314 18:53:36.639789  976444 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0314 18:53:36.639804  976444 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0314 18:53:36.639814  976444 command_runner.go:130] > # separated by comma.
	I0314 18:53:36.639825  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639834  976444 command_runner.go:130] > # uid_mappings = ""
	I0314 18:53:36.639844  976444 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0314 18:53:36.639856  976444 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0314 18:53:36.639870  976444 command_runner.go:130] > # separated by comma.
	I0314 18:53:36.639893  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639902  976444 command_runner.go:130] > # gid_mappings = ""
	I0314 18:53:36.639912  976444 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0314 18:53:36.639924  976444 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 18:53:36.639935  976444 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 18:53:36.639949  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639958  976444 command_runner.go:130] > # minimum_mappable_uid = -1
	I0314 18:53:36.639966  976444 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0314 18:53:36.639979  976444 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 18:53:36.639990  976444 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 18:53:36.640003  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.640013  976444 command_runner.go:130] > # minimum_mappable_gid = -1
	I0314 18:53:36.640022  976444 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0314 18:53:36.640034  976444 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0314 18:53:36.640043  976444 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0314 18:53:36.640048  976444 command_runner.go:130] > # ctr_stop_timeout = 30
	I0314 18:53:36.640059  976444 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0314 18:53:36.640068  976444 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0314 18:53:36.640078  976444 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0314 18:53:36.640086  976444 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0314 18:53:36.640094  976444 command_runner.go:130] > drop_infra_ctr = false
	I0314 18:53:36.640103  976444 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0314 18:53:36.640111  976444 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0314 18:53:36.640121  976444 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0314 18:53:36.640131  976444 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0314 18:53:36.640142  976444 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0314 18:53:36.640154  976444 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0314 18:53:36.640163  976444 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0314 18:53:36.640173  976444 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0314 18:53:36.640179  976444 command_runner.go:130] > # shared_cpuset = ""
	I0314 18:53:36.640189  976444 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0314 18:53:36.640196  976444 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0314 18:53:36.640203  976444 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0314 18:53:36.640231  976444 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0314 18:53:36.640249  976444 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0314 18:53:36.640265  976444 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0314 18:53:36.640273  976444 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0314 18:53:36.640278  976444 command_runner.go:130] > # enable_criu_support = false
	I0314 18:53:36.640286  976444 command_runner.go:130] > # Enable/disable the generation of the container,
	I0314 18:53:36.640312  976444 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0314 18:53:36.640323  976444 command_runner.go:130] > # enable_pod_events = false
	I0314 18:53:36.640332  976444 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0314 18:53:36.640371  976444 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0314 18:53:36.640378  976444 command_runner.go:130] > # default_runtime = "runc"
	I0314 18:53:36.640385  976444 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0314 18:53:36.640396  976444 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior of being created as a directory).
	I0314 18:53:36.640409  976444 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0314 18:53:36.640416  976444 command_runner.go:130] > # creation as a file is not desired either.
	I0314 18:53:36.640428  976444 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0314 18:53:36.640435  976444 command_runner.go:130] > # the hostname is being managed dynamically.
	I0314 18:53:36.640442  976444 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0314 18:53:36.640450  976444 command_runner.go:130] > # ]
	I0314 18:53:36.640461  976444 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0314 18:53:36.640472  976444 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0314 18:53:36.640481  976444 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0314 18:53:36.640486  976444 command_runner.go:130] > # Each entry in the table should follow the format:
	I0314 18:53:36.640492  976444 command_runner.go:130] > #
	I0314 18:53:36.640496  976444 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0314 18:53:36.640501  976444 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0314 18:53:36.640505  976444 command_runner.go:130] > # runtime_type = "oci"
	I0314 18:53:36.640555  976444 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0314 18:53:36.640562  976444 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0314 18:53:36.640567  976444 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0314 18:53:36.640574  976444 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0314 18:53:36.640578  976444 command_runner.go:130] > # monitor_env = []
	I0314 18:53:36.640582  976444 command_runner.go:130] > # privileged_without_host_devices = false
	I0314 18:53:36.640587  976444 command_runner.go:130] > # allowed_annotations = []
	I0314 18:53:36.640592  976444 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0314 18:53:36.640595  976444 command_runner.go:130] > # Where:
	I0314 18:53:36.640600  976444 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0314 18:53:36.640615  976444 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0314 18:53:36.640623  976444 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0314 18:53:36.640630  976444 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0314 18:53:36.640636  976444 command_runner.go:130] > #   in $PATH.
	I0314 18:53:36.640641  976444 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0314 18:53:36.640648  976444 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0314 18:53:36.640654  976444 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0314 18:53:36.640658  976444 command_runner.go:130] > #   state.
	I0314 18:53:36.640664  976444 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0314 18:53:36.640672  976444 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0314 18:53:36.640677  976444 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0314 18:53:36.640682  976444 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0314 18:53:36.640690  976444 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0314 18:53:36.640701  976444 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0314 18:53:36.640708  976444 command_runner.go:130] > #   The currently recognized values are:
	I0314 18:53:36.640714  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0314 18:53:36.640722  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0314 18:53:36.640728  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0314 18:53:36.640735  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0314 18:53:36.640744  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0314 18:53:36.640749  976444 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0314 18:53:36.640758  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0314 18:53:36.640764  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0314 18:53:36.640769  976444 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0314 18:53:36.640777  976444 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0314 18:53:36.640781  976444 command_runner.go:130] > #   deprecated option "conmon".
	I0314 18:53:36.640790  976444 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0314 18:53:36.640796  976444 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0314 18:53:36.640802  976444 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0314 18:53:36.640809  976444 command_runner.go:130] > #   should be moved to the container's cgroup
	I0314 18:53:36.640816  976444 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0314 18:53:36.640823  976444 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0314 18:53:36.640829  976444 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0314 18:53:36.640836  976444 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0314 18:53:36.640839  976444 command_runner.go:130] > #
	I0314 18:53:36.640844  976444 command_runner.go:130] > # Using the seccomp notifier feature:
	I0314 18:53:36.640851  976444 command_runner.go:130] > #
	I0314 18:53:36.640859  976444 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0314 18:53:36.640865  976444 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0314 18:53:36.640871  976444 command_runner.go:130] > #
	I0314 18:53:36.640876  976444 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0314 18:53:36.640883  976444 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0314 18:53:36.640886  976444 command_runner.go:130] > #
	I0314 18:53:36.640891  976444 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0314 18:53:36.640898  976444 command_runner.go:130] > # feature.
	I0314 18:53:36.640900  976444 command_runner.go:130] > #
	I0314 18:53:36.640906  976444 command_runner.go:130] > # If everything is set up, CRI-O will modify chosen seccomp profiles for
	I0314 18:53:36.640914  976444 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0314 18:53:36.640920  976444 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0314 18:53:36.640928  976444 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0314 18:53:36.640933  976444 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0314 18:53:36.640936  976444 command_runner.go:130] > #
	I0314 18:53:36.640941  976444 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0314 18:53:36.640949  976444 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0314 18:53:36.640953  976444 command_runner.go:130] > #
	I0314 18:53:36.640964  976444 command_runner.go:130] > # This also means that the Pod's "restartPolicy" has to be set to "Never",
	I0314 18:53:36.640975  976444 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0314 18:53:36.640983  976444 command_runner.go:130] > #
	I0314 18:53:36.640993  976444 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0314 18:53:36.641004  976444 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0314 18:53:36.641012  976444 command_runner.go:130] > # limitation.
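For reference, a runtime handler that opts into the seccomp notifier would list the annotation in its allowed_annotations; the handler name and paths below are an illustrative sketch, not part of the captured config:

  [crio.runtime.runtimes.runc-debug]
  runtime_path = "/usr/bin/runc"
  runtime_type = "oci"
  runtime_root = "/run/runc"
  monitor_path = "/usr/libexec/crio/conmon"
  # Allow pods using this handler to request the notifier via the
  # "io.kubernetes.cri-o.seccompNotifierAction" annotation.
  allowed_annotations = [
  	"io.kubernetes.cri-o.seccompNotifierAction",
  ]

A pod would then select this handler and set the annotation to "stop" (with restartPolicy: Never) so that CRI-O terminates the workload once a blocked syscall is observed.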
	I0314 18:53:36.641018  976444 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0314 18:53:36.641027  976444 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0314 18:53:36.641036  976444 command_runner.go:130] > runtime_type = "oci"
	I0314 18:53:36.641043  976444 command_runner.go:130] > runtime_root = "/run/runc"
	I0314 18:53:36.641052  976444 command_runner.go:130] > runtime_config_path = ""
	I0314 18:53:36.641058  976444 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0314 18:53:36.641067  976444 command_runner.go:130] > monitor_cgroup = "pod"
	I0314 18:53:36.641072  976444 command_runner.go:130] > monitor_exec_cgroup = ""
	I0314 18:53:36.641078  976444 command_runner.go:130] > monitor_env = [
	I0314 18:53:36.641083  976444 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 18:53:36.641087  976444 command_runner.go:130] > ]
	I0314 18:53:36.641098  976444 command_runner.go:130] > privileged_without_host_devices = false
	I0314 18:53:36.641107  976444 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0314 18:53:36.641112  976444 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0314 18:53:36.641117  976444 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0314 18:53:36.641125  976444 command_runner.go:130] > # Each workload has a name, activation_annotation, annotation_prefix and a set of resources it supports mutating.
	I0314 18:53:36.641135  976444 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0314 18:53:36.641141  976444 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0314 18:53:36.641152  976444 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0314 18:53:36.641163  976444 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0314 18:53:36.641170  976444 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0314 18:53:36.641179  976444 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0314 18:53:36.641183  976444 command_runner.go:130] > # Example:
	I0314 18:53:36.641187  976444 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0314 18:53:36.641194  976444 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0314 18:53:36.641198  976444 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0314 18:53:36.641203  976444 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0314 18:53:36.641206  976444 command_runner.go:130] > # cpuset = 0
	I0314 18:53:36.641210  976444 command_runner.go:130] > # cpushares = "0-1"
	I0314 18:53:36.641213  976444 command_runner.go:130] > # Where:
	I0314 18:53:36.641217  976444 command_runner.go:130] > # The workload name is workload-type.
	I0314 18:53:36.641223  976444 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0314 18:53:36.641228  976444 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0314 18:53:36.641233  976444 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0314 18:53:36.641240  976444 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0314 18:53:36.641244  976444 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0314 18:53:36.641249  976444 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0314 18:53:36.641255  976444 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0314 18:53:36.641258  976444 command_runner.go:130] > # Default value is set to true
	I0314 18:53:36.641262  976444 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0314 18:53:36.641267  976444 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0314 18:53:36.641271  976444 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0314 18:53:36.641280  976444 command_runner.go:130] > # Default value is set to 'false'
	I0314 18:53:36.641285  976444 command_runner.go:130] > # disable_hostport_mapping = false
	I0314 18:53:36.641291  976444 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0314 18:53:36.641293  976444 command_runner.go:130] > #
	I0314 18:53:36.641298  976444 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0314 18:53:36.641308  976444 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0314 18:53:36.641314  976444 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0314 18:53:36.641320  976444 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0314 18:53:36.641324  976444 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0314 18:53:36.641327  976444 command_runner.go:130] > [crio.image]
	I0314 18:53:36.641332  976444 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0314 18:53:36.641336  976444 command_runner.go:130] > # default_transport = "docker://"
	I0314 18:53:36.641342  976444 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0314 18:53:36.641347  976444 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0314 18:53:36.641350  976444 command_runner.go:130] > # global_auth_file = ""
	I0314 18:53:36.641355  976444 command_runner.go:130] > # The image used to instantiate infra containers.
	I0314 18:53:36.641362  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.641369  976444 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0314 18:53:36.641378  976444 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0314 18:53:36.641387  976444 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0314 18:53:36.641394  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.641400  976444 command_runner.go:130] > # pause_image_auth_file = ""
	I0314 18:53:36.641407  976444 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0314 18:53:36.641417  976444 command_runner.go:130] > # When explicitly set to "", it will fall back to the entrypoint and command
	I0314 18:53:36.641426  976444 command_runner.go:130] > # specified in the pause image. When commented out, it will fall back to the
	I0314 18:53:36.641438  976444 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0314 18:53:36.641444  976444 command_runner.go:130] > # pause_command = "/pause"
	I0314 18:53:36.641456  976444 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0314 18:53:36.641467  976444 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0314 18:53:36.641476  976444 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0314 18:53:36.641487  976444 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0314 18:53:36.641505  976444 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0314 18:53:36.641517  976444 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0314 18:53:36.641526  976444 command_runner.go:130] > # pinned_images = [
	I0314 18:53:36.641531  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641542  976444 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0314 18:53:36.641556  976444 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0314 18:53:36.641568  976444 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0314 18:53:36.641580  976444 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0314 18:53:36.641590  976444 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0314 18:53:36.641599  976444 command_runner.go:130] > # signature_policy = ""
	I0314 18:53:36.641613  976444 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0314 18:53:36.641622  976444 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0314 18:53:36.641628  976444 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0314 18:53:36.641637  976444 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0314 18:53:36.641642  976444 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0314 18:53:36.641650  976444 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0314 18:53:36.641655  976444 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0314 18:53:36.641663  976444 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0314 18:53:36.641667  976444 command_runner.go:130] > # changing them here.
	I0314 18:53:36.641673  976444 command_runner.go:130] > # insecure_registries = [
	I0314 18:53:36.641678  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641690  976444 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0314 18:53:36.641706  976444 command_runner.go:130] > # ignore; the last will ignore volumes entirely.
	I0314 18:53:36.641715  976444 command_runner.go:130] > # image_volumes = "mkdir"
	I0314 18:53:36.641722  976444 command_runner.go:130] > # Temporary directory to use for storing big files
	I0314 18:53:36.641732  976444 command_runner.go:130] > # big_files_temporary_dir = ""
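For reference, a minimal sketch of the [crio.image] overrides discussed above (the registry host is a placeholder, not from the captured config):

  [crio.image]
  pause_image = "registry.k8s.io/pause:3.9"
  # Keep the pause image out of kubelet garbage collection.
  pinned_images = [
  	"registry.k8s.io/pause:3.9",
  ]
  # Prefer configuring registries in /etc/containers/registries.conf;
  # only skip TLS verification for registries you control.
  insecure_registries = [
  	"registry.local:5000",
  ]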
	I0314 18:53:36.641741  976444 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0314 18:53:36.641750  976444 command_runner.go:130] > # CNI plugins.
	I0314 18:53:36.641756  976444 command_runner.go:130] > [crio.network]
	I0314 18:53:36.641768  976444 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0314 18:53:36.641779  976444 command_runner.go:130] > # CRI-O will pick up the first one found in network_dir.
	I0314 18:53:36.641788  976444 command_runner.go:130] > # cni_default_network = ""
	I0314 18:53:36.641800  976444 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0314 18:53:36.641807  976444 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0314 18:53:36.641812  976444 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0314 18:53:36.641818  976444 command_runner.go:130] > # plugin_dirs = [
	I0314 18:53:36.641822  976444 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0314 18:53:36.641825  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641833  976444 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0314 18:53:36.641838  976444 command_runner.go:130] > [crio.metrics]
	I0314 18:53:36.641845  976444 command_runner.go:130] > # Globally enable or disable metrics support.
	I0314 18:53:36.641849  976444 command_runner.go:130] > enable_metrics = true
	I0314 18:53:36.641855  976444 command_runner.go:130] > # Specify enabled metrics collectors.
	I0314 18:53:36.641859  976444 command_runner.go:130] > # Per default all metrics are enabled.
	I0314 18:53:36.641867  976444 command_runner.go:130] > # It is possible to prefix the metrics with "container_runtime_" and "crio_".
	I0314 18:53:36.641873  976444 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0314 18:53:36.641887  976444 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0314 18:53:36.641901  976444 command_runner.go:130] > # metrics_collectors = [
	I0314 18:53:36.641907  976444 command_runner.go:130] > # 	"operations",
	I0314 18:53:36.641912  976444 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0314 18:53:36.641918  976444 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0314 18:53:36.641922  976444 command_runner.go:130] > # 	"operations_errors",
	I0314 18:53:36.641928  976444 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0314 18:53:36.641932  976444 command_runner.go:130] > # 	"image_pulls_by_name",
	I0314 18:53:36.641936  976444 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0314 18:53:36.641940  976444 command_runner.go:130] > # 	"image_pulls_failures",
	I0314 18:53:36.641944  976444 command_runner.go:130] > # 	"image_pulls_successes",
	I0314 18:53:36.641950  976444 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0314 18:53:36.641955  976444 command_runner.go:130] > # 	"image_layer_reuse",
	I0314 18:53:36.641965  976444 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0314 18:53:36.641971  976444 command_runner.go:130] > # 	"containers_oom_total",
	I0314 18:53:36.641980  976444 command_runner.go:130] > # 	"containers_oom",
	I0314 18:53:36.641987  976444 command_runner.go:130] > # 	"processes_defunct",
	I0314 18:53:36.641996  976444 command_runner.go:130] > # 	"operations_total",
	I0314 18:53:36.642003  976444 command_runner.go:130] > # 	"operations_latency_seconds",
	I0314 18:53:36.642012  976444 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0314 18:53:36.642019  976444 command_runner.go:130] > # 	"operations_errors_total",
	I0314 18:53:36.642028  976444 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0314 18:53:36.642035  976444 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0314 18:53:36.642044  976444 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0314 18:53:36.642051  976444 command_runner.go:130] > # 	"image_pulls_success_total",
	I0314 18:53:36.642060  976444 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0314 18:53:36.642069  976444 command_runner.go:130] > # 	"containers_oom_count_total",
	I0314 18:53:36.642076  976444 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0314 18:53:36.642086  976444 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0314 18:53:36.642091  976444 command_runner.go:130] > # ]
	I0314 18:53:36.642100  976444 command_runner.go:130] > # The port on which the metrics server will listen.
	I0314 18:53:36.642108  976444 command_runner.go:130] > # metrics_port = 9090
	I0314 18:53:36.642113  976444 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0314 18:53:36.642120  976444 command_runner.go:130] > # metrics_socket = ""
	I0314 18:53:36.642125  976444 command_runner.go:130] > # The certificate for the secure metrics server.
	I0314 18:53:36.642130  976444 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0314 18:53:36.642148  976444 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0314 18:53:36.642155  976444 command_runner.go:130] > # certificate on any modification event.
	I0314 18:53:36.642159  976444 command_runner.go:130] > # metrics_cert = ""
	I0314 18:53:36.642166  976444 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0314 18:53:36.642171  976444 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0314 18:53:36.642177  976444 command_runner.go:130] > # metrics_key = ""
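For reference, a sketch that narrows the exporter to a few collectors instead of the default "all enabled" behaviour (port and collector choice are illustrative):

  [crio.metrics]
  enable_metrics = true
  metrics_port = 9090
  # Collector names may also be given with the
  # "crio_" or "container_runtime_" prefix.
  metrics_collectors = [
  	"operations",
  	"image_pulls_failure_total",
  	"containers_oom_count_total",
  ]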
	I0314 18:53:36.642182  976444 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0314 18:53:36.642185  976444 command_runner.go:130] > [crio.tracing]
	I0314 18:53:36.642190  976444 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0314 18:53:36.642197  976444 command_runner.go:130] > # enable_tracing = false
	I0314 18:53:36.642202  976444 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0314 18:53:36.642208  976444 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0314 18:53:36.642214  976444 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0314 18:53:36.642219  976444 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
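For reference, enabling the OpenTelemetry export described above might look like the following sketch (endpoint and sampling rate are illustrative):

  [crio.tracing]
  enable_tracing = true
  tracing_endpoint = "127.0.0.1:4317"
  # 1000000 samples per million = always sample.
  tracing_sampling_rate_per_million = 1000000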
	I0314 18:53:36.642226  976444 command_runner.go:130] > # CRI-O NRI configuration.
	I0314 18:53:36.642229  976444 command_runner.go:130] > [crio.nri]
	I0314 18:53:36.642236  976444 command_runner.go:130] > # Globally enable or disable NRI.
	I0314 18:53:36.642239  976444 command_runner.go:130] > # enable_nri = false
	I0314 18:53:36.642245  976444 command_runner.go:130] > # NRI socket to listen on.
	I0314 18:53:36.642249  976444 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0314 18:53:36.642254  976444 command_runner.go:130] > # NRI plugin directory to use.
	I0314 18:53:36.642259  976444 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0314 18:53:36.642266  976444 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0314 18:53:36.642270  976444 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0314 18:53:36.642278  976444 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0314 18:53:36.642282  976444 command_runner.go:130] > # nri_disable_connections = false
	I0314 18:53:36.642290  976444 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0314 18:53:36.642294  976444 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0314 18:53:36.642301  976444 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0314 18:53:36.642305  976444 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
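For reference, a sketch enabling NRI with the commented defaults shown above:

  [crio.nri]
  enable_nri = true
  nri_listen = "/var/run/nri/nri.sock"
  nri_plugin_dir = "/opt/nri/plugins"
  nri_plugin_config_dir = "/etc/nri/conf.d"
  nri_plugin_registration_timeout = "5s"
  nri_plugin_request_timeout = "2s"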
	I0314 18:53:36.642311  976444 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0314 18:53:36.642315  976444 command_runner.go:130] > [crio.stats]
	I0314 18:53:36.642323  976444 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0314 18:53:36.642328  976444 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0314 18:53:36.642334  976444 command_runner.go:130] > # stats_collection_period = 0
	I0314 18:53:36.642363  976444 command_runner.go:130] ! time="2024-03-14 18:53:36.593997107Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0314 18:53:36.642392  976444 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
	I0314 18:53:36.642550  976444 cni.go:84] Creating CNI manager for ""
	I0314 18:53:36.642567  976444 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 18:53:36.642579  976444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:53:36.642600  976444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-669543 NodeName:multinode-669543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:53:36.642747  976444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-669543"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 18:53:36.642817  976444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:53:36.654534  976444 command_runner.go:130] > kubeadm
	I0314 18:53:36.654549  976444 command_runner.go:130] > kubectl
	I0314 18:53:36.654554  976444 command_runner.go:130] > kubelet
	I0314 18:53:36.654579  976444 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:53:36.654639  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 18:53:36.665648  976444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0314 18:53:36.683948  976444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:53:36.701596  976444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0314 18:53:36.720366  976444 ssh_runner.go:195] Run: grep 192.168.39.68	control-plane.minikube.internal$ /etc/hosts
	I0314 18:53:36.724918  976444 command_runner.go:130] > 192.168.39.68	control-plane.minikube.internal
	I0314 18:53:36.725001  976444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:53:36.867575  976444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:53:36.883121  976444 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543 for IP: 192.168.39.68
	I0314 18:53:36.883146  976444 certs.go:194] generating shared ca certs ...
	I0314 18:53:36.883169  976444 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:53:36.883375  976444 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:53:36.883432  976444 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:53:36.883447  976444 certs.go:256] generating profile certs ...
	I0314 18:53:36.883555  976444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/client.key
	I0314 18:53:36.883634  976444 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key.b0a84d17
	I0314 18:53:36.883697  976444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key
	I0314 18:53:36.883713  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:53:36.883731  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:53:36.883749  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:53:36.883768  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:53:36.883792  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:53:36.883811  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:53:36.883829  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:53:36.883860  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:53:36.883926  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:53:36.883964  976444 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:53:36.883978  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:53:36.884093  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:53:36.884171  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:53:36.884237  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:53:36.884312  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:53:36.884358  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:36.884381  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:53:36.884399  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:53:36.884978  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:53:36.913755  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:53:36.940272  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:53:36.967729  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:53:36.994769  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:53:37.021693  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:53:37.047860  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:53:37.074012  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:53:37.099866  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:53:37.126224  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:53:37.152647  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:53:37.179632  976444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:53:37.197700  976444 ssh_runner.go:195] Run: openssl version
	I0314 18:53:37.203789  976444 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 18:53:37.203863  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:53:37.215249  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220324  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220362  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220408  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.226451  976444 command_runner.go:130] > b5213941
	I0314 18:53:37.226498  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:53:37.236638  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:53:37.248291  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253182  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253324  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253375  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.259373  976444 command_runner.go:130] > 51391683
	I0314 18:53:37.259591  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:53:37.269523  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:53:37.281120  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285802  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285927  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285981  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.291769  976444 command_runner.go:130] > 3ec20f2e
	I0314 18:53:37.291820  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 18:53:37.301535  976444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:53:37.306366  976444 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:53:37.306387  976444 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 18:53:37.306393  976444 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0314 18:53:37.306399  976444 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 18:53:37.306410  976444 command_runner.go:130] > Access: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306418  976444 command_runner.go:130] > Modify: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306423  976444 command_runner.go:130] > Change: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306430  976444 command_runner.go:130] >  Birth: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306469  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:53:37.312488  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.312551  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:53:37.318353  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.318560  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:53:37.324472  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.324546  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:53:37.330269  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.330458  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:53:37.336186  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.336529  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 18:53:37.342172  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.342467  976444 kubeadm.go:391] StartCluster: {Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:53:37.342581  976444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:53:37.342634  976444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:53:37.380367  976444 command_runner.go:130] > 218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214
	I0314 18:53:37.380395  976444 command_runner.go:130] > ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd
	I0314 18:53:37.380406  976444 command_runner.go:130] > 13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886
	I0314 18:53:37.380415  976444 command_runner.go:130] > 1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434
	I0314 18:53:37.380495  976444 command_runner.go:130] > 38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef
	I0314 18:53:37.380513  976444 command_runner.go:130] > b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9
	I0314 18:53:37.380532  976444 command_runner.go:130] > 35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2
	I0314 18:53:37.380626  976444 command_runner.go:130] > 8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6
	I0314 18:53:37.382000  976444 cri.go:89] found id: "218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214"
	I0314 18:53:37.382016  976444 cri.go:89] found id: "ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd"
	I0314 18:53:37.382022  976444 cri.go:89] found id: "13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886"
	I0314 18:53:37.382027  976444 cri.go:89] found id: "1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434"
	I0314 18:53:37.382032  976444 cri.go:89] found id: "38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef"
	I0314 18:53:37.382036  976444 cri.go:89] found id: "b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9"
	I0314 18:53:37.382040  976444 cri.go:89] found id: "35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2"
	I0314 18:53:37.382043  976444 cri.go:89] found id: "8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6"
	I0314 18:53:37.382047  976444 cri.go:89] found id: ""
	I0314 18:53:37.382094  976444 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.050560523Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442505050537170,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=67e28911-d008-4756-bae1-c08ac7eabdd9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.051438267Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b70f0d33-4431-4e53-8196-3bcfcf66c9c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.051518413Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b70f0d33-4431-4e53-8196-3bcfcf66c9c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.052083692Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b70f0d33-4431-4e53-8196-3bcfcf66c9c6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.097331920Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=80e9b325-b8e5-472f-b911-6c903550ae8e name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.097427893Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=80e9b325-b8e5-472f-b911-6c903550ae8e name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.098898758Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=179a72a4-51ae-4a91-96ff-e5032d6ad76c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.099699902Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442505099675697,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=179a72a4-51ae-4a91-96ff-e5032d6ad76c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.100408814Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2b6a9aa7-83c2-4d8e-af98-6d2991ee805b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.100493528Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2b6a9aa7-83c2-4d8e-af98-6d2991ee805b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.100803215Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2b6a9aa7-83c2-4d8e-af98-6d2991ee805b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.150289919Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=50898029-cc55-41ff-827a-1fa9d43cd11e name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.150360607Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=50898029-cc55-41ff-827a-1fa9d43cd11e name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.151577098Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=178ae381-4c27-4373-9b54-ffc33dedbec0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.152584322Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442505152066072,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=178ae381-4c27-4373-9b54-ffc33dedbec0 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.153556556Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1be83ea-da3f-476e-b0d9-ef7ddd876b30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.153606523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1be83ea-da3f-476e-b0d9-ef7ddd876b30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.154119848Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1be83ea-da3f-476e-b0d9-ef7ddd876b30 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.198643941Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4981e454-c9af-49f1-855d-bf270f745dfa name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.198738478Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4981e454-c9af-49f1-855d-bf270f745dfa name=/runtime.v1.RuntimeService/Version
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.200375480Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=f5afe847-029d-4c82-923b-1179a559e82a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.201224843Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442505201201407,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=f5afe847-029d-4c82-923b-1179a559e82a name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.201820551Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=7738203c-6cf7-43fd-be63-e88dcc1c684d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.201874315Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=7738203c-6cf7-43fd-be63-e88dcc1c684d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:55:05 multinode-669543 crio[2871]: time="2024-03-14 18:55:05.202328762Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=7738203c-6cf7-43fd-be63-e88dcc1c684d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED              STATE               NAME                      ATTEMPT             POD ID              POD
	c9e5cf013f1ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      47 seconds ago       Running             busybox                   1                   e3eb8d64101e7       busybox-5b5d89c9d6-wdd4q
	25c482a385775       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      About a minute ago   Running             coredns                   1                   64c87fe588bf8       coredns-5dd5756b68-z2ssg
	c99ebafcfeb0b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      About a minute ago   Running             kindnet-cni               1                   d1068f17bc4e1       kindnet-j8rsz
	131b388ec4eb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      About a minute ago   Running             storage-provisioner       1                   7c9f8981a0c43       storage-provisioner
	97671595afead       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      About a minute ago   Running             kube-proxy                1                   3e99e94db5408       kube-proxy-gv9z7
	2399815b362d2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      About a minute ago   Running             etcd                      1                   60e03a93ae441       etcd-multinode-669543
	dabf1dd85b17a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      About a minute ago   Running             kube-scheduler            1                   6d354a5baa15a       kube-scheduler-multinode-669543
	fd45f1769a655       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      About a minute ago   Running             kube-apiserver            1                   fe6d4a6311692       kube-apiserver-multinode-669543
	7b3bebef91bde       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      About a minute ago   Running             kube-controller-manager   1                   27231f52f0d52       kube-controller-manager-multinode-669543
	2e15cccd80d48       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   6 minutes ago        Exited              busybox                   0                   5ff3ff89264fe       busybox-5b5d89c9d6-wdd4q
	218bd1408ce44       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      7 minutes ago        Exited              coredns                   0                   b65e68430d817       coredns-5dd5756b68-z2ssg
	ede9eb36f24d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      7 minutes ago        Exited              storage-provisioner       0                   c44d003130ba6       storage-provisioner
	13b4bebfdcd4e       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    7 minutes ago        Exited              kindnet-cni               0                   9b9c5bc67c578       kindnet-j8rsz
	1a160e30f8660       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      7 minutes ago        Exited              kube-proxy                0                   a78eb4281690b       kube-proxy-gv9z7
	38944d713ba28       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      7 minutes ago        Exited              etcd                      0                   c80d82d6e24a9       etcd-multinode-669543
	b44b3dc852c67       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      7 minutes ago        Exited              kube-controller-manager   0                   375c781f82868       kube-controller-manager-multinode-669543
	35d37a68a0eec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      7 minutes ago        Exited              kube-scheduler            0                   6f734f5704330       kube-scheduler-multinode-669543
	8703d57d41951       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      7 minutes ago        Exited              kube-apiserver            0                   e87a510584e32       kube-apiserver-multinode-669543
	
	
	==> coredns [218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214] <==
	[INFO] 10.244.0.3:37224 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001467997s
	[INFO] 10.244.0.3:44702 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037412s
	[INFO] 10.244.0.3:47299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000034441s
	[INFO] 10.244.0.3:43385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00080483s
	[INFO] 10.244.0.3:48298 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000030046s
	[INFO] 10.244.0.3:46943 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000026659s
	[INFO] 10.244.0.3:49889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138651s
	[INFO] 10.244.1.2:57501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129654s
	[INFO] 10.244.1.2:47443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167448s
	[INFO] 10.244.1.2:57696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156261s
	[INFO] 10.244.1.2:51402 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008625s
	[INFO] 10.244.0.3:34523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100518s
	[INFO] 10.244.0.3:40478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205966s
	[INFO] 10.244.0.3:54659 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114635s
	[INFO] 10.244.0.3:60980 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075611s
	[INFO] 10.244.1.2:33839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014272s
	[INFO] 10.244.1.2:58938 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235074s
	[INFO] 10.244.1.2:60502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013409s
	[INFO] 10.244.1.2:44031 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218951s
	[INFO] 10.244.0.3:43462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128452s
	[INFO] 10.244.0.3:58131 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097696s
	[INFO] 10.244.0.3:60406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010639s
	[INFO] 10.244.0.3:47260 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34243 - 23074 "HINFO IN 3443610798101173519.8912060676849101262. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015786475s
	
	
	==> describe nodes <==
	Name:               multinode-669543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-669543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-669543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_47_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:47:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-669543
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:55:05 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    multinode-669543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 542fcf9277304e138330d4f556f68ad2
	  System UUID:                542fcf92-7730-4e13-8330-d4f556f68ad2
	  Boot ID:                    b3c9cb6d-9323-4693-b839-d5ce5214638a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wdd4q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m38s
	  kube-system                 coredns-5dd5756b68-z2ssg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     7m27s
	  kube-system                 etcd-multinode-669543                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         7m40s
	  kube-system                 kindnet-j8rsz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      7m27s
	  kube-system                 kube-apiserver-multinode-669543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         7m41s
	  kube-system                 kube-controller-manager-multinode-669543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 kube-proxy-gv9z7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m27s
	  kube-system                 kube-scheduler-multinode-669543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         7m40s
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         7m25s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 7m25s              kube-proxy       
	  Normal  Starting                 80s                kube-proxy       
	  Normal  NodeAllocatableEnforced  7m40s              kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  7m40s              kubelet          Node multinode-669543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    7m40s              kubelet          Node multinode-669543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     7m40s              kubelet          Node multinode-669543 status is now: NodeHasSufficientPID
	  Normal  Starting                 7m40s              kubelet          Starting kubelet.
	  Normal  RegisteredNode           7m28s              node-controller  Node multinode-669543 event: Registered Node multinode-669543 in Controller
	  Normal  NodeReady                7m23s              kubelet          Node multinode-669543 status is now: NodeReady
	  Normal  Starting                 87s                kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  87s                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  86s (x8 over 87s)  kubelet          Node multinode-669543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    86s (x8 over 87s)  kubelet          Node multinode-669543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     86s (x7 over 87s)  kubelet          Node multinode-669543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           69s                node-controller  Node multinode-669543 event: Registered Node multinode-669543 in Controller
	
	
	Name:               multinode-669543-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-669543-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-669543
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_54_27_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:54:27 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-669543-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:54:57 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:54:27 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:54:27 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:54:27 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:54:34 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-669543-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 701481c581d0418e8c30473e0e0e1f20
	  System UUID:                701481c5-81d0-418e-8c30-473e0e0e1f20
	  Boot ID:                    f70ac467-a980-478a-a70f-2b859eb40567
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-hgm7c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         44s
	  kube-system                 kindnet-fjd7q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m48s
	  kube-system                 kube-proxy-r4pb9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m48s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 6m43s                  kube-proxy       
	  Normal  Starting                 33s                    kube-proxy       
	  Normal  NodeHasSufficientMemory  6m48s (x5 over 6m49s)  kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m48s (x5 over 6m49s)  kubelet          Node multinode-669543-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m48s (x5 over 6m49s)  kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                6m40s                  kubelet          Node multinode-669543-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  38s (x5 over 40s)      kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    38s (x5 over 40s)      kubelet          Node multinode-669543-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     38s (x5 over 40s)      kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           34s                    node-controller  Node multinode-669543-m02 event: Registered Node multinode-669543-m02 in Controller
	  Normal  NodeReady                31s                    kubelet          Node multinode-669543-m02 status is now: NodeReady
	
	
	Name:               multinode-669543-m03
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-669543-m03
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-669543
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_54_57_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:54:56 +0000
	Taints:             node.kubernetes.io/not-ready:NoExecute
	Unschedulable:      false
	Lease:              Failed to get lease: leases.coordination.k8s.io "multinode-669543-m03" not found
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:55:01 +0000   Thu, 14 Mar 2024 18:54:56 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:55:01 +0000   Thu, 14 Mar 2024 18:54:56 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:55:01 +0000   Thu, 14 Mar 2024 18:54:56 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:55:01 +0000   Thu, 14 Mar 2024 18:55:01 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.89
	  Hostname:    multinode-669543-m03
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 bd4e0042e62e4405af9db5cb63803f95
	  System UUID:                bd4e0042-e62e-4405-af9d-b5cb63803f95
	  Boot ID:                    8248aee2-e779-42bd-827c-1c1519028670
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.2.0/24
	PodCIDRs:                     10.244.2.0/24
	Non-terminated Pods:          (2 in total)
	  Namespace                   Name                CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                ------------  ----------  ---------------  -------------  ---
	  kube-system                 kindnet-xcn2w       100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      6m3s
	  kube-system                 kube-proxy-vjshs    0 (0%)        0 (0%)      0 (0%)           0 (0%)         6m3s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 5m17s                  kube-proxy       
	  Normal  Starting                 5m58s                  kube-proxy       
	  Normal  Starting                 6s                     kube-proxy       
	  Normal  NodeHasSufficientMemory  6m3s (x5 over 6m4s)    kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    6m3s (x5 over 6m4s)    kubelet          Node multinode-669543-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     6m3s (x5 over 6m4s)    kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientPID
	  Normal  NodeReady                5m56s                  kubelet          Node multinode-669543-m03 status is now: NodeReady
	  Normal  NodeHasSufficientPID     5m21s (x5 over 5m23s)  kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientPID
	  Normal  NodeHasNoDiskPressure    5m21s (x5 over 5m23s)  kubelet          Node multinode-669543-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientMemory  5m21s (x5 over 5m23s)  kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeReady                5m15s                  kubelet          Node multinode-669543-m03 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  9s (x5 over 10s)       kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9s (x5 over 10s)       kubelet          Node multinode-669543-m03 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9s (x5 over 10s)       kubelet          Node multinode-669543-m03 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           4s                     node-controller  Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller
	  Normal  NodeReady                4s                     kubelet          Node multinode-669543-m03 status is now: NodeReady
	
	
	==> dmesg <==
	[  +0.062893] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.182414] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.136892] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.257960] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +5.164998] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.064438] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.153420] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.037506] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.729969] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.082774] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.729109] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +0.117670] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.603216] kauditd_printk_skb: 82 callbacks suppressed
	[Mar14 18:53] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.160648] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.176099] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.154255] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.261656] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +7.405848] systemd-fstab-generator[2958]: Ignoring "noauto" option for root device
	[  +0.087595] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.723058] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +5.724329] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.161927] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.592284] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[Mar14 18:54] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc] <==
	{"level":"info","ts":"2024-03-14T18:53:40.506528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:53:40.506538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:53:40.506855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 switched to configuration voters=(9375015013596480675)"}
	{"level":"info","ts":"2024-03-14T18:53:40.507007Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","added-peer-id":"821abe7be15f44a3","added-peer-peer-urls":["https://192.168.39.68:2380"]}
	{"level":"info","ts":"2024-03-14T18:53:40.507128Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:53:40.507181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:53:40.54436Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T18:53:40.544631Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"821abe7be15f44a3","initial-advertise-peer-urls":["https://192.168.39.68:2380"],"listen-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.68:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T18:53:40.544689Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T18:53:40.544812Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:53:40.544845Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:53:41.700143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgPreVoteResp from 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.700245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.700253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became leader at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.70026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.705887Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"821abe7be15f44a3","local-member-attributes":"{Name:multinode-669543 ClientURLs:[https://192.168.39.68:2379]}","request-path":"/0/members/821abe7be15f44a3/attributes","cluster-id":"68cd46418ae274f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:53:41.7059Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:53:41.706198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:53:41.707622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.68:2379"}
	{"level":"info","ts":"2024-03-14T18:53:41.70764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:53:41.707773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:53:41.707811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef] <==
	{"level":"info","ts":"2024-03-14T18:47:20.4721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.477027Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.480525Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"821abe7be15f44a3","local-member-attributes":"{Name:multinode-669543 ClientURLs:[https://192.168.39.68:2379]}","request-path":"/0/members/821abe7be15f44a3/attributes","cluster-id":"68cd46418ae274f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:47:20.480581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:47:20.481719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:47:20.481791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:47:20.487574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.68:2379"}
	{"level":"info","ts":"2024-03-14T18:47:20.492037Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:47:20.492155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T18:47:20.49721Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.497451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.4976Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:51:57.209849Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T18:51:57.21489Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-669543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	{"level":"warn","ts":"2024-03-14T18:51:57.222407Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.222605Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.306684Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.306755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T18:51:57.308437Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"821abe7be15f44a3","current-leader-member-id":"821abe7be15f44a3"}
	{"level":"info","ts":"2024-03-14T18:51:57.311165Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:51:57.311398Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:51:57.311441Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-669543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> kernel <==
	 18:55:05 up 8 min,  0 users,  load average: 0.10, 0.22, 0.14
	Linux multinode-669543 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886] <==
	I0314 18:51:12.583838       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:22.590868       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:22.590996       1 main.go:227] handling current node
	I0314 18:51:22.591028       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:22.591035       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:22.591197       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:22.591227       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:32.604525       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:32.604672       1 main.go:227] handling current node
	I0314 18:51:32.604759       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:32.604831       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:32.605093       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:32.605155       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:42.618795       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:42.618888       1 main.go:227] handling current node
	I0314 18:51:42.618997       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:42.619028       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:42.623034       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:42.623092       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:52.630046       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:52.630105       1 main.go:227] handling current node
	I0314 18:51:52.630115       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:52.630121       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:52.630237       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:52.630271       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85] <==
	I0314 18:54:25.707245       1 main.go:227] handling current node
	I0314 18:54:25.707297       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:54:25.707332       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:54:35.712814       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:54:35.713147       1 main.go:227] handling current node
	I0314 18:54:35.713204       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:54:35.713230       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:54:35.713353       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:54:35.713382       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:54:45.756341       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:54:45.756483       1 main.go:227] handling current node
	I0314 18:54:45.756530       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:54:45.756566       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:54:45.756773       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:54:45.757169       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:54:55.770572       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:54:55.770805       1 main.go:227] handling current node
	I0314 18:54:55.770842       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:54:55.770901       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:55:05.789863       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:55:05.790011       1 main.go:227] handling current node
	I0314 18:55:05.790022       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:55:05.790029       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:55:05.790397       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:55:05.790409       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.2.0/24] 
	
	
	==> kube-apiserver [8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6] <==
	I0314 18:47:22.366829       1 controller.go:624] quota admission added evaluator for: namespaces
	I0314 18:47:22.371656       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:47:22.386354       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:47:22.386407       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:47:22.386440       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:47:22.386463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:47:22.386485       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:47:22.405900       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:47:22.415663       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 18:47:23.265444       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0314 18:47:23.270234       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0314 18:47:23.270271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:47:23.873525       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:47:23.917574       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:47:24.010181       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0314 18:47:24.021016       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.68]
	I0314 18:47:24.024801       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:47:24.036640       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 18:47:24.336474       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 18:47:25.350055       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 18:47:25.364502       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0314 18:47:25.381164       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 18:47:38.554988       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0314 18:47:38.604244       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0314 18:51:57.214350       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30] <==
	I0314 18:53:43.125278       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 18:53:43.125563       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 18:53:43.125604       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 18:53:43.125654       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 18:53:43.215530       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 18:53:43.217308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:53:43.217609       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:53:43.217656       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:53:43.217662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:53:43.218092       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:53:43.231882       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 18:53:43.236731       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:53:43.236774       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 18:53:43.241157       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:53:43.247454       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:53:43.295011       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 18:53:43.307080       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 18:53:44.118327       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:53:45.893257       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 18:53:46.018309       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 18:53:46.026517       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 18:53:46.099454       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:53:46.109448       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:53:56.337123       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:53:56.430790       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555] <==
	I0314 18:54:26.399314       1 event.go:307] "Event occurred" object="multinode-669543-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m02 event: Removing Node multinode-669543-m02 from Controller"
	I0314 18:54:26.760118       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="79.553µs"
	I0314 18:54:27.384018       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m02\" does not exist"
	I0314 18:54:27.384591       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-nslm6" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-nslm6"
	I0314 18:54:27.396369       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m02" podCIDRs=["10.244.1.0/24"]
	I0314 18:54:29.883763       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="167.285µs"
	I0314 18:54:29.911463       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.367µs"
	I0314 18:54:29.923536       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="56.365µs"
	I0314 18:54:29.935450       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="53.515µs"
	I0314 18:54:29.943343       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="47.414µs"
	I0314 18:54:29.945344       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="29.986µs"
	I0314 18:54:31.400887       1 event.go:307] "Event occurred" object="multinode-669543-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m02 event: Registered Node multinode-669543-m02 in Controller"
	I0314 18:54:34.818142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:34.841018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="144.146µs"
	I0314 18:54:34.855480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.644µs"
	I0314 18:54:36.178382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.084713ms"
	I0314 18:54:36.179322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.817µs"
	I0314 18:54:36.413602       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hgm7c" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-hgm7c"
	I0314 18:54:54.217150       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:56.417056       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m03 event: Removing Node multinode-669543-m03 from Controller"
	I0314 18:54:56.570204       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:56.571142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:54:56.603000       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.2.0/24"]
	I0314 18:55:01.418598       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:55:01.899243       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	
	
	==> kube-controller-manager [b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9] <==
	I0314 18:49:02.705291       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:02.705403       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:49:02.733807       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xcn2w"
	I0314 18:49:02.733860       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vjshs"
	I0314 18:49:02.740557       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.2.0/24"]
	I0314 18:49:02.949852       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-669543-m03"
	I0314 18:49:02.950153       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:49:09.848898       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:41.447980       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:42.974586       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m03 event: Removing Node multinode-669543-m03 from Controller"
	I0314 18:49:44.120329       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:49:44.124836       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:44.147091       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.3.0/24"]
	I0314 18:49:47.975765       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:49:50.725132       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:50:33.008824       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-669543-m03 status is now: NodeNotReady"
	I0314 18:50:33.008896       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:50:33.013630       1 event.go:307] "Event occurred" object="multinode-669543-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-669543-m02 status is now: NodeNotReady"
	I0314 18:50:33.023667       1 event.go:307] "Event occurred" object="kube-system/kindnet-xcn2w" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.026247       1 event.go:307] "Event occurred" object="kube-system/kindnet-fjd7q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.037234       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vjshs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.043673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-r4pb9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.060886       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-nslm6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.072773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.02508ms"
	I0314 18:50:33.073223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.502µs"
	
	
	==> kube-proxy [1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434] <==
	I0314 18:47:40.337555       1 server_others.go:69] "Using iptables proxy"
	I0314 18:47:40.357136       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0314 18:47:40.425312       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:47:40.425331       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:47:40.428840       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:47:40.429363       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:47:40.429532       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:47:40.429662       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:47:40.434626       1 config.go:188] "Starting service config controller"
	I0314 18:47:40.434677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:47:40.434712       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:47:40.434728       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:47:40.435404       1 config.go:315] "Starting node config controller"
	I0314 18:47:40.435440       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:47:40.535470       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:47:40.535521       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:47:40.535530       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517] <==
	I0314 18:53:44.853685       1 server_others.go:69] "Using iptables proxy"
	I0314 18:53:44.889462       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0314 18:53:45.020189       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:53:45.020213       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:53:45.029023       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:53:45.029079       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:53:45.029303       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:53:45.029314       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:53:45.030769       1 config.go:188] "Starting service config controller"
	I0314 18:53:45.030777       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:53:45.030805       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:53:45.030808       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:53:45.031221       1 config.go:315] "Starting node config controller"
	I0314 18:53:45.031227       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:53:45.131252       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:53:45.131549       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:53:45.131765       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2] <==
	E0314 18:47:22.430869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:47:22.430900       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:47:22.431296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:47:22.431120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:47:22.431409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:47:23.337394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:47:23.337450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:47:23.402062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:47:23.402118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:47:23.417805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:47:23.418232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:47:23.422148       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:47:23.422239       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:47:23.450690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:47:23.450760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 18:47:23.523831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:47:23.524092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:47:23.558399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:47:23.558559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:47:23.623673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:47:23.623811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 18:47:25.706218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:51:57.228640       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 18:51:57.228887       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0314 18:51:57.231778       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e] <==
	I0314 18:53:41.329066       1 serving.go:348] Generated self-signed cert in-memory
	I0314 18:53:43.267292       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 18:53:43.267394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:53:43.278281       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 18:53:43.279150       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:53:43.279195       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0314 18:53:43.279295       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 18:53:43.293238       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:53:43.279304       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0314 18:53:43.293292       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0314 18:53:43.294557       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0314 18:53:43.394528       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:53:43.394528       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0314 18:53:43.395047       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.823616    3088 topology_manager.go:215] "Topology Admit Handler" podUID="36fffd3c-245b-4633-96d8-3c1fc216830c" podNamespace="kube-system" podName="coredns-5dd5756b68-z2ssg"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.823779    3088 topology_manager.go:215] "Topology Admit Handler" podUID="ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6" podNamespace="kube-system" podName="kube-proxy-gv9z7"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.823893    3088 topology_manager.go:215] "Topology Admit Handler" podUID="a06c8b3d-ca1e-491c-bb77-ce60da9c5f96" podNamespace="kube-system" podName="storage-provisioner"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.824312    3088 topology_manager.go:215] "Topology Admit Handler" podUID="2e875679-7a0f-4c0e-bb89-61d1d25322b9" podNamespace="default" podName="busybox-5b5d89c9d6-wdd4q"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.826822    3088 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.876895    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-cfg\" (UniqueName: \"kubernetes.io/host-path/d74393fb-ce21-43b4-9400-960184cbe665-cni-cfg\") pod \"kindnet-j8rsz\" (UID: \"d74393fb-ce21-43b4-9400-960184cbe665\") " pod="kube-system/kindnet-j8rsz"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.877092    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/d74393fb-ce21-43b4-9400-960184cbe665-xtables-lock\") pod \"kindnet-j8rsz\" (UID: \"d74393fb-ce21-43b4-9400-960184cbe665\") " pod="kube-system/kindnet-j8rsz"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.877114    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/d74393fb-ce21-43b4-9400-960184cbe665-lib-modules\") pod \"kindnet-j8rsz\" (UID: \"d74393fb-ce21-43b4-9400-960184cbe665\") " pod="kube-system/kindnet-j8rsz"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.877157    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6-lib-modules\") pod \"kube-proxy-gv9z7\" (UID: \"ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6\") " pod="kube-system/kube-proxy-gv9z7"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.877231    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6-xtables-lock\") pod \"kube-proxy-gv9z7\" (UID: \"ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6\") " pod="kube-system/kube-proxy-gv9z7"
	Mar 14 18:53:43 multinode-669543 kubelet[3088]: I0314 18:53:43.877255    3088 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/a06c8b3d-ca1e-491c-bb77-ce60da9c5f96-tmp\") pod \"storage-provisioner\" (UID: \"a06c8b3d-ca1e-491c-bb77-ce60da9c5f96\") " pod="kube-system/storage-provisioner"
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.936901    3088 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:54:38 multinode-669543 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:54:38 multinode-669543 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:54:38 multinode-669543 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:54:38 multinode-669543 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.974419    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podba3b7b3c-c295-4e3f-98f3-66278b5bf7d6/crio-a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Error finding container a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Status 404 returned error can't find the container with id a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.975266    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2e875679-7a0f-4c0e-bb89-61d1d25322b9/crio-5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Error finding container 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Status 404 returned error can't find the container with id 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.975597    3088 manager.go:1106] Failed to create existing container: /kubepods/podd74393fb-ce21-43b4-9400-960184cbe665/crio-9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Error finding container 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Status 404 returned error can't find the container with id 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.976120    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podacac00fe4893897b1bd8f4bb5003ed66/crio-375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Error finding container 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Status 404 returned error can't find the container with id 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.976533    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podcd1dfb67d1de36699e8c1e198b392ffb/crio-6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Error finding container 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Status 404 returned error can't find the container with id 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.977780    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod5a0929f2d9c353ee576b697cb4a8fdc9/crio-e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Error finding container e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Status 404 returned error can't find the container with id e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.978471    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod36fffd3c-245b-4633-96d8-3c1fc216830c/crio-b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Error finding container b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Status 404 returned error can't find the container with id b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.979009    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod9bfe873d8c178b7d6aa8ac90faa3a096/crio-c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Error finding container c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Status 404 returned error can't find the container with id c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838
	Mar 14 18:54:38 multinode-669543 kubelet[3088]: E0314 18:54:38.979225    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/poda06c8b3d-ca1e-491c-bb77-ce60da9c5f96/crio-c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Error finding container c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Status 404 returned error can't find the container with id c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:55:04.702851  977257 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
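(Note on the stderr above: "bufio.Scanner: token too long" is the standard Go error returned when a single line in the scanned file, here lastStart.txt, exceeds bufio.Scanner's default 64 KiB token limit. The following is a minimal, self-contained sketch, not minikube's actual logs code, that reproduces the error and shows how a larger scanner buffer avoids it.)

package main

import (
	"bufio"
	"fmt"
	"strings"
)

func main() {
	// A single line longer than bufio's default 64 KiB token limit
	// reproduces the "token too long" error reported above.
	line := strings.Repeat("x", 2*bufio.MaxScanTokenSize)

	sc := bufio.NewScanner(strings.NewReader(line + "\n"))
	for sc.Scan() {
	}
	fmt.Println(sc.Err()) // bufio.Scanner: token too long

	// Raising the scanner's buffer cap lets the same line scan cleanly.
	sc = bufio.NewScanner(strings.NewReader(line + "\n"))
	sc.Buffer(make([]byte, 0, 1024*1024), len(line)+1)
	for sc.Scan() {
	}
	fmt.Println(sc.Err()) // <nil>
}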
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-669543 -n multinode-669543
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-669543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/RestartKeepsNodes FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/RestartKeepsNodes (313.20s)

                                                
                                    
x
+
TestMultiNode/serial/StopMultiNode (141.5s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopMultiNode
multinode_test.go:345: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 stop
E0314 18:55:17.576581  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
multinode_test.go:345: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-669543 stop: exit status 82 (2m0.494535389s)

                                                
                                                
-- stdout --
	* Stopping node "multinode-669543-m02"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:347: failed to stop cluster. args "out/minikube-linux-amd64 -p multinode-669543 stop": exit status 82
multinode_test.go:351: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status
E0314 18:57:14.528597  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:57:14.854063  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
multinode_test.go:351: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-669543 status: exit status 3 (18.666907097s)

                                                
                                                
-- stdout --
	multinode-669543
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-669543-m02
	type: Worker
	host: Error
	kubelet: Nonexistent
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:57:28.248637  977798 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host
	E0314 18:57:28.248710  977798 status.go:260] status error: NewSession: new client: new client: dial tcp 192.168.39.16:22: connect: no route to host

                                                
                                                
** /stderr **
multinode_test.go:354: failed to run minikube status. args "out/minikube-linux-amd64 -p multinode-669543 status" : exit status 3
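(Note: the status failure follows directly from the "dial tcp 192.168.39.16:22: connect: no route to host" errors in the stderr above, i.e. the m02 guest's SSH port is unreachable from the host. The sketch below is a generic reachability probe, not part of the test suite; the address is copied from the log and would need adjusting for another node.)

package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// Address taken from the status error above; adjust for the node under test.
	addr := "192.168.39.16:22"
	conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
	if err != nil {
		// On an unreachable guest this surfaces the same
		// "connect: no route to host" error seen in the report.
		fmt.Println("ssh port unreachable:", err)
		return
	}
	defer conn.Close()
	fmt.Println("ssh port reachable:", addr)
}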
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p multinode-669543 -n multinode-669543
helpers_test.go:244: <<< TestMultiNode/serial/StopMultiNode FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestMultiNode/serial/StopMultiNode]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p multinode-669543 logs -n 25: (1.634406222s)
helpers_test.go:252: TestMultiNode/serial/StopMultiNode logs: 
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| Command |                                          Args                                           |     Profile      |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543:/home/docker/cp-test_multinode-669543-m02_multinode-669543.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543 sudo cat                                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m02_multinode-669543.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03:/home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543-m03 sudo cat                                   | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt                      |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp testdata/cp-test.txt                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03:/home/docker/cp-test.txt                                           |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543-m03.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543:/home/docker/cp-test_multinode-669543-m03_multinode-669543.txt         |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543 sudo cat                                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m03_multinode-669543.txt                          |                  |         |         |                     |                     |
	| cp      | multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt                       | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m02:/home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n                                                                 | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | multinode-669543-m03 sudo cat                                                           |                  |         |         |                     |                     |
	|         | /home/docker/cp-test.txt                                                                |                  |         |         |                     |                     |
	| ssh     | multinode-669543 ssh -n multinode-669543-m02 sudo cat                                   | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | /home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt                      |                  |         |         |                     |                     |
	| node    | multinode-669543 node stop m03                                                          | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	| node    | multinode-669543 node start                                                             | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC | 14 Mar 24 18:49 UTC |
	|         | m03 -v=7 --alsologtostderr                                                              |                  |         |         |                     |                     |
	| node    | list -p multinode-669543                                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC |                     |
	| stop    | -p multinode-669543                                                                     | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:49 UTC |                     |
	| start   | -p multinode-669543                                                                     | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:51 UTC | 14 Mar 24 18:55 UTC |
	|         | --wait=true -v=8                                                                        |                  |         |         |                     |                     |
	|         | --alsologtostderr                                                                       |                  |         |         |                     |                     |
	| node    | list -p multinode-669543                                                                | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:55 UTC |                     |
	| node    | multinode-669543 node delete                                                            | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:55 UTC | 14 Mar 24 18:55 UTC |
	|         | m03                                                                                     |                  |         |         |                     |                     |
	| stop    | multinode-669543 stop                                                                   | multinode-669543 | jenkins | v1.32.0 | 14 Mar 24 18:55 UTC |                     |
	|---------|-----------------------------------------------------------------------------------------|------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:51:56
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:51:56.337008  976444 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:51:56.337282  976444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:56.337290  976444 out.go:304] Setting ErrFile to fd 2...
	I0314 18:51:56.337295  976444 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:51:56.337464  976444 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:51:56.338072  976444 out.go:298] Setting JSON to false
	I0314 18:51:56.338950  976444 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":95668,"bootTime":1710346648,"procs":186,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:51:56.339014  976444 start.go:139] virtualization: kvm guest
	I0314 18:51:56.341262  976444 out.go:177] * [multinode-669543] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:51:56.342487  976444 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:51:56.342514  976444 notify.go:220] Checking for updates...
	I0314 18:51:56.343807  976444 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:51:56.345213  976444 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:51:56.346371  976444 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:51:56.347557  976444 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:51:56.348749  976444 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:51:56.350442  976444 config.go:182] Loaded profile config "multinode-669543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:51:56.350549  976444 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:51:56.351050  976444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:51:56.351122  976444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:51:56.366435  976444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41319
	I0314 18:51:56.366900  976444 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:51:56.367630  976444 main.go:141] libmachine: Using API Version  1
	I0314 18:51:56.367667  976444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:51:56.368070  976444 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:51:56.368302  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.403528  976444 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:51:56.404769  976444 start.go:297] selected driver: kvm2
	I0314 18:51:56.404783  976444 start.go:901] validating driver "kvm2" against &{Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime: ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:51:56.404906  976444 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:51:56.405206  976444 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:51:56.405266  976444 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:51:56.420727  976444 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:51:56.421337  976444 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 18:51:56.421403  976444 cni.go:84] Creating CNI manager for ""
	I0314 18:51:56.421415  976444 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 18:51:56.421465  976444 start.go:340] cluster config:
	{Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:51:56.421605  976444 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:51:56.427094  976444 out.go:177] * Starting "multinode-669543" primary control-plane node in "multinode-669543" cluster
	I0314 18:51:56.433231  976444 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:51:56.433283  976444 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 18:51:56.433306  976444 cache.go:56] Caching tarball of preloaded images
	I0314 18:51:56.433417  976444 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 18:51:56.433432  976444 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 18:51:56.433551  976444 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/config.json ...
	I0314 18:51:56.433776  976444 start.go:360] acquireMachinesLock for multinode-669543: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 18:51:56.433836  976444 start.go:364] duration metric: took 29.397µs to acquireMachinesLock for "multinode-669543"
	I0314 18:51:56.433855  976444 start.go:96] Skipping create...Using existing machine configuration
	I0314 18:51:56.433861  976444 fix.go:54] fixHost starting: 
	I0314 18:51:56.434162  976444 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:51:56.434198  976444 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:51:56.448838  976444 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42113
	I0314 18:51:56.449295  976444 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:51:56.449802  976444 main.go:141] libmachine: Using API Version  1
	I0314 18:51:56.449853  976444 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:51:56.450179  976444 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:51:56.450403  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.450599  976444 main.go:141] libmachine: (multinode-669543) Calling .GetState
	I0314 18:51:56.452159  976444 fix.go:112] recreateIfNeeded on multinode-669543: state=Running err=<nil>
	W0314 18:51:56.452179  976444 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 18:51:56.454041  976444 out.go:177] * Updating the running kvm2 "multinode-669543" VM ...
	I0314 18:51:56.455190  976444 machine.go:94] provisionDockerMachine start ...
	I0314 18:51:56.455211  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:51:56.455438  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.458160  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.458763  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.458804  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.458903  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.459096  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.459272  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.459404  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.459581  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.459791  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.459803  976444 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 18:51:56.566043  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-669543
	
	I0314 18:51:56.566077  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.566347  976444 buildroot.go:166] provisioning hostname "multinode-669543"
	I0314 18:51:56.566361  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.566555  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.569634  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.570048  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.570086  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.570156  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.570323  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.570479  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.570631  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.570807  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.571038  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.571057  976444 main.go:141] libmachine: About to run SSH command:
	sudo hostname multinode-669543 && echo "multinode-669543" | sudo tee /etc/hostname
	I0314 18:51:56.696244  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: multinode-669543
	
	I0314 18:51:56.696273  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.699301  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.699733  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.699761  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.699971  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.700274  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.700458  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.700662  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.700856  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:56.701077  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:56.701095  976444 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\smultinode-669543' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 multinode-669543/g' /etc/hosts;
				else 
					echo '127.0.1.1 multinode-669543' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 18:51:56.805651  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 18:51:56.805699  976444 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 18:51:56.805736  976444 buildroot.go:174] setting up certificates
	I0314 18:51:56.805750  976444 provision.go:84] configureAuth start
	I0314 18:51:56.805768  976444 main.go:141] libmachine: (multinode-669543) Calling .GetMachineName
	I0314 18:51:56.806086  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:51:56.808899  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.809311  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.809345  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.809555  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.812058  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.812486  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.812528  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.812693  976444 provision.go:143] copyHostCerts
	I0314 18:51:56.812729  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:51:56.812782  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 18:51:56.812806  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 18:51:56.812877  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 18:51:56.813040  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:51:56.813081  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 18:51:56.813091  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 18:51:56.813147  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 18:51:56.813211  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem -> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:51:56.813228  976444 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 18:51:56.813236  976444 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 18:51:56.813259  976444 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 18:51:56.813327  976444 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.multinode-669543 san=[127.0.0.1 192.168.39.68 localhost minikube multinode-669543]
	I0314 18:51:56.897036  976444 provision.go:177] copyRemoteCerts
	I0314 18:51:56.897094  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 18:51:56.897116  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:56.900099  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.900505  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:56.900533  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:56.900717  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:56.900933  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:56.901080  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:56.901277  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:51:56.988952  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem -> /etc/docker/ca.pem
	I0314 18:51:56.989036  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 18:51:57.019036  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem -> /etc/docker/server.pem
	I0314 18:51:57.019094  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1216 bytes)
	I0314 18:51:57.046423  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem -> /etc/docker/server-key.pem
	I0314 18:51:57.046476  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 18:51:57.076249  976444 provision.go:87] duration metric: took 270.480727ms to configureAuth
	I0314 18:51:57.076281  976444 buildroot.go:189] setting minikube options for container-runtime
	I0314 18:51:57.076518  976444 config.go:182] Loaded profile config "multinode-669543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:51:57.076613  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:51:57.079411  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:57.079961  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:51:57.079985  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:51:57.080176  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:51:57.080395  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:57.080574  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:51:57.080785  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:51:57.081033  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:51:57.081205  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:51:57.081220  976444 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 18:53:27.960112  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 18:53:27.960154  976444 machine.go:97] duration metric: took 1m31.504947384s to provisionDockerMachine
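The %!s(MISSING) token in the provisioning command above is a Go fmt placeholder artifact: the literal %s inside the command string was re-expanded by the logger, not dropped from the command that ran. Judging from the content echoed back by the guest, the step amounts to the following sketch (path and value exactly as logged, quoting normalized):

	sudo mkdir -p /etc/sysconfig
	# write the CRI-O minikube options file, then restart the runtime
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio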
	I0314 18:53:27.960172  976444 start.go:293] postStartSetup for "multinode-669543" (driver="kvm2")
	I0314 18:53:27.960188  976444 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 18:53:27.960249  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:27.960674  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 18:53:27.960708  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:27.964281  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:27.964955  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:27.965000  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:27.965213  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:27.965416  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:27.965598  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:27.965760  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.048828  976444 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 18:53:28.053457  976444 command_runner.go:130] > NAME=Buildroot
	I0314 18:53:28.053478  976444 command_runner.go:130] > VERSION=2023.02.9-dirty
	I0314 18:53:28.053484  976444 command_runner.go:130] > ID=buildroot
	I0314 18:53:28.053488  976444 command_runner.go:130] > VERSION_ID=2023.02.9
	I0314 18:53:28.053493  976444 command_runner.go:130] > PRETTY_NAME="Buildroot 2023.02.9"
	I0314 18:53:28.053531  976444 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 18:53:28.053550  976444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 18:53:28.053619  976444 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 18:53:28.053732  976444 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 18:53:28.053746  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /etc/ssl/certs/9513112.pem
	I0314 18:53:28.053860  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 18:53:28.063797  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:53:28.090047  976444 start.go:296] duration metric: took 129.859698ms for postStartSetup
	I0314 18:53:28.090087  976444 fix.go:56] duration metric: took 1m31.656225159s for fixHost
	I0314 18:53:28.090113  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.092660  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.092999  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.093033  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.093164  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.093361  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.093544  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.093728  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.093897  976444 main.go:141] libmachine: Using SSH client type: native
	I0314 18:53:28.094075  976444 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.68 22 <nil> <nil>}
	I0314 18:53:28.094090  976444 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 18:53:28.197491  976444 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710442408.169414138
	
	I0314 18:53:28.197528  976444 fix.go:216] guest clock: 1710442408.169414138
	I0314 18:53:28.197539  976444 fix.go:229] Guest: 2024-03-14 18:53:28.169414138 +0000 UTC Remote: 2024-03-14 18:53:28.090091744 +0000 UTC m=+91.804740506 (delta=79.322394ms)
	I0314 18:53:28.197562  976444 fix.go:200] guest clock delta is within tolerance: 79.322394ms
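The date +%!s(MISSING).%!N(MISSING) probe above is the same logger artifact; given that the guest returned an epoch value with nanosecond precision (1710442408.169414138), the command that actually ran was presumably just:

	date +%s.%N   # guest wall clock, compared against the host time to compute the ~79ms delta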
	I0314 18:53:28.197567  976444 start.go:83] releasing machines lock for "multinode-669543", held for 1m31.76371898s
	I0314 18:53:28.197588  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.197868  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:53:28.200520  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.200957  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.200987  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.201102  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.201766  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.201943  976444 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:53:28.202053  976444 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 18:53:28.202112  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.202188  976444 ssh_runner.go:195] Run: cat /version.json
	I0314 18:53:28.202213  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:53:28.204807  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205144  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.205193  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205217  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205434  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.205608  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.205722  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:28.205759  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:28.205779  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.205872  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:53:28.205940  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.205992  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:53:28.206126  976444 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:53:28.206247  976444 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:53:28.281887  976444 command_runner.go:130] > {"iso_version": "v1.32.1-1710348681-18375", "kicbase_version": "v0.0.42-1710284843-18375", "minikube_version": "v1.32.0", "commit": "fd5757a6603390a2c0efe3b1e5cdd797538203fd"}
	I0314 18:53:28.282055  976444 ssh_runner.go:195] Run: systemctl --version
	I0314 18:53:28.309346  976444 command_runner.go:130] > <a href="https://github.com/kubernetes/registry.k8s.io">Temporary Redirect</a>.
	I0314 18:53:28.309397  976444 command_runner.go:130] > systemd 252 (252)
	I0314 18:53:28.309435  976444 command_runner.go:130] > -PAM -AUDIT -SELINUX -APPARMOR -IMA -SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL +ACL +BLKID +CURL -ELFUTILS -FIDO2 -IDN2 -IDN +IPTC +KMOD -LIBCRYPTSETUP +LIBFDISK -PCRE2 -PWQUALITY -P11KIT -QRENCODE -TPM2 -BZIP2 +LZ4 +XZ +ZLIB -ZSTD -BPF_FRAMEWORK -XKBCOMMON -UTMP -SYSVINIT default-hierarchy=unified
	I0314 18:53:28.309515  976444 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 18:53:28.475164  976444 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	I0314 18:53:28.483185  976444 command_runner.go:130] ! stat: cannot statx '/etc/cni/net.d/*loopback.conf*': No such file or directory
	W0314 18:53:28.483638  976444 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 18:53:28.483707  976444 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 18:53:28.493953  976444 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 18:53:28.493973  976444 start.go:494] detecting cgroup driver to use...
	I0314 18:53:28.494019  976444 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 18:53:28.511167  976444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 18:53:28.525698  976444 docker.go:217] disabling cri-docker service (if available) ...
	I0314 18:53:28.525735  976444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 18:53:28.539532  976444 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 18:53:28.553912  976444 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 18:53:28.707562  976444 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 18:53:28.865944  976444 docker.go:233] disabling docker service ...
	I0314 18:53:28.866020  976444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 18:53:28.886291  976444 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 18:53:28.901462  976444 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 18:53:29.049987  976444 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 18:53:29.194314  976444 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 18:53:29.211025  976444 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 18:53:29.232616  976444 command_runner.go:130] > runtime-endpoint: unix:///var/run/crio/crio.sock
	I0314 18:53:29.232655  976444 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 18:53:29.232707  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.246809  976444 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 18:53:29.246880  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.259639  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.271880  976444 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 18:53:29.283695  976444 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 18:53:29.295810  976444 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 18:53:29.306418  976444 command_runner.go:130] > net.bridge.bridge-nf-call-iptables = 1
	I0314 18:53:29.306633  976444 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 18:53:29.317772  976444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:53:29.460396  976444 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 18:53:36.354342  976444 ssh_runner.go:235] Completed: sudo systemctl restart crio: (6.893907209s)
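Condensed into one place, the container-runtime preparation logged above between 18:53:29 and 18:53:36 amounts to the following guest commands (a sketch: the printf placeholder is resolved from the echoed crictl.yaml content and quoting is normalized; paths and values are as logged):

	# point crictl at the CRI-O socket
	printf '%s\n' 'runtime-endpoint: unix:///var/run/crio/crio.sock' | sudo tee /etc/crictl.yaml
	# pin the pause image and switch CRI-O to the cgroupfs cgroup manager, with conmon in the pod cgroup
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf
	# drop the minikube-generated CNI config, check bridge netfilter, enable IPv4 forwarding, restart CRI-O
	sudo rm -rf /etc/cni/net.mk
	sudo sysctl net.bridge.bridge-nf-call-iptables
	sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	sudo systemctl daemon-reload && sudo systemctl restart crio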
	I0314 18:53:36.354379  976444 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 18:53:36.354440  976444 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 18:53:36.360026  976444 command_runner.go:130] >   File: /var/run/crio/crio.sock
	I0314 18:53:36.360054  976444 command_runner.go:130] >   Size: 0         	Blocks: 0          IO Block: 4096   socket
	I0314 18:53:36.360063  976444 command_runner.go:130] > Device: 0,22	Inode: 1344        Links: 1
	I0314 18:53:36.360082  976444 command_runner.go:130] > Access: (0660/srw-rw----)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 18:53:36.360094  976444 command_runner.go:130] > Access: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360105  976444 command_runner.go:130] > Modify: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360115  976444 command_runner.go:130] > Change: 2024-03-14 18:53:36.218361868 +0000
	I0314 18:53:36.360124  976444 command_runner.go:130] >  Birth: -
	I0314 18:53:36.360233  976444 start.go:562] Will wait 60s for crictl version
	I0314 18:53:36.360286  976444 ssh_runner.go:195] Run: which crictl
	I0314 18:53:36.364946  976444 command_runner.go:130] > /usr/bin/crictl
	I0314 18:53:36.365086  976444 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 18:53:36.407205  976444 command_runner.go:130] > Version:  0.1.0
	I0314 18:53:36.407229  976444 command_runner.go:130] > RuntimeName:  cri-o
	I0314 18:53:36.407234  976444 command_runner.go:130] > RuntimeVersion:  1.29.1
	I0314 18:53:36.407239  976444 command_runner.go:130] > RuntimeApiVersion:  v1
	I0314 18:53:36.407506  976444 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 18:53:36.407614  976444 ssh_runner.go:195] Run: crio --version
	I0314 18:53:36.439747  976444 command_runner.go:130] > crio version 1.29.1
	I0314 18:53:36.439768  976444 command_runner.go:130] > Version:        1.29.1
	I0314 18:53:36.439776  976444 command_runner.go:130] > GitCommit:      unknown
	I0314 18:53:36.439782  976444 command_runner.go:130] > GitCommitDate:  unknown
	I0314 18:53:36.439788  976444 command_runner.go:130] > GitTreeState:   clean
	I0314 18:53:36.439798  976444 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 18:53:36.439805  976444 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 18:53:36.439810  976444 command_runner.go:130] > Compiler:       gc
	I0314 18:53:36.439815  976444 command_runner.go:130] > Platform:       linux/amd64
	I0314 18:53:36.439819  976444 command_runner.go:130] > Linkmode:       dynamic
	I0314 18:53:36.439823  976444 command_runner.go:130] > BuildTags:      
	I0314 18:53:36.439828  976444 command_runner.go:130] >   containers_image_ostree_stub
	I0314 18:53:36.439833  976444 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 18:53:36.439836  976444 command_runner.go:130] >   btrfs_noversion
	I0314 18:53:36.439841  976444 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 18:53:36.439845  976444 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 18:53:36.439852  976444 command_runner.go:130] >   seccomp
	I0314 18:53:36.439857  976444 command_runner.go:130] > LDFlags:          unknown
	I0314 18:53:36.439860  976444 command_runner.go:130] > SeccompEnabled:   true
	I0314 18:53:36.439867  976444 command_runner.go:130] > AppArmorEnabled:  false
	I0314 18:53:36.441172  976444 ssh_runner.go:195] Run: crio --version
	I0314 18:53:36.473536  976444 command_runner.go:130] > crio version 1.29.1
	I0314 18:53:36.473558  976444 command_runner.go:130] > Version:        1.29.1
	I0314 18:53:36.473563  976444 command_runner.go:130] > GitCommit:      unknown
	I0314 18:53:36.473567  976444 command_runner.go:130] > GitCommitDate:  unknown
	I0314 18:53:36.473571  976444 command_runner.go:130] > GitTreeState:   clean
	I0314 18:53:36.473577  976444 command_runner.go:130] > BuildDate:      2024-03-13T22:45:41Z
	I0314 18:53:36.473581  976444 command_runner.go:130] > GoVersion:      go1.21.6
	I0314 18:53:36.473585  976444 command_runner.go:130] > Compiler:       gc
	I0314 18:53:36.473604  976444 command_runner.go:130] > Platform:       linux/amd64
	I0314 18:53:36.473609  976444 command_runner.go:130] > Linkmode:       dynamic
	I0314 18:53:36.473617  976444 command_runner.go:130] > BuildTags:      
	I0314 18:53:36.473621  976444 command_runner.go:130] >   containers_image_ostree_stub
	I0314 18:53:36.473625  976444 command_runner.go:130] >   exclude_graphdriver_btrfs
	I0314 18:53:36.473629  976444 command_runner.go:130] >   btrfs_noversion
	I0314 18:53:36.473633  976444 command_runner.go:130] >   exclude_graphdriver_devicemapper
	I0314 18:53:36.473637  976444 command_runner.go:130] >   libdm_no_deferred_remove
	I0314 18:53:36.473641  976444 command_runner.go:130] >   seccomp
	I0314 18:53:36.473645  976444 command_runner.go:130] > LDFlags:          unknown
	I0314 18:53:36.473649  976444 command_runner.go:130] > SeccompEnabled:   true
	I0314 18:53:36.473653  976444 command_runner.go:130] > AppArmorEnabled:  false
	I0314 18:53:36.481572  976444 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 18:53:36.483061  976444 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:53:36.485935  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:36.486331  976444 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:53:36.486366  976444 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:53:36.486526  976444 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 18:53:36.491354  976444 command_runner.go:130] > 192.168.39.1	host.minikube.internal
	I0314 18:53:36.491512  976444 kubeadm.go:877] updating cluster {Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 18:53:36.491670  976444 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 18:53:36.491782  976444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:53:36.547502  976444 command_runner.go:130] > {
	I0314 18:53:36.547528  976444 command_runner.go:130] >   "images": [
	I0314 18:53:36.547536  976444 command_runner.go:130] >     {
	I0314 18:53:36.547548  976444 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 18:53:36.547554  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547562  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 18:53:36.547568  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547573  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547585  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 18:53:36.547598  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 18:53:36.547607  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547618  976444 command_runner.go:130] >       "size": "65258016",
	I0314 18:53:36.547625  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547633  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547642  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547650  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547655  976444 command_runner.go:130] >     },
	I0314 18:53:36.547662  976444 command_runner.go:130] >     {
	I0314 18:53:36.547672  976444 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 18:53:36.547685  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547694  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 18:53:36.547700  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547707  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547720  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 18:53:36.547733  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 18:53:36.547749  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547760  976444 command_runner.go:130] >       "size": "65291810",
	I0314 18:53:36.547767  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547783  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547793  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547800  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547807  976444 command_runner.go:130] >     },
	I0314 18:53:36.547812  976444 command_runner.go:130] >     {
	I0314 18:53:36.547823  976444 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 18:53:36.547834  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547843  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 18:53:36.547849  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547857  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.547872  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 18:53:36.547888  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 18:53:36.547896  976444 command_runner.go:130] >       ],
	I0314 18:53:36.547903  976444 command_runner.go:130] >       "size": "1363676",
	I0314 18:53:36.547912  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.547919  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.547929  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.547938  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.547947  976444 command_runner.go:130] >     },
	I0314 18:53:36.547953  976444 command_runner.go:130] >     {
	I0314 18:53:36.547967  976444 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 18:53:36.547977  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.547986  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 18:53:36.547995  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548002  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548019  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 18:53:36.548043  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 18:53:36.548053  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548060  976444 command_runner.go:130] >       "size": "31470524",
	I0314 18:53:36.548066  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548073  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548083  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548092  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548112  976444 command_runner.go:130] >     },
	I0314 18:53:36.548121  976444 command_runner.go:130] >     {
	I0314 18:53:36.548133  976444 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 18:53:36.548143  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548152  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 18:53:36.548161  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548168  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548181  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 18:53:36.548197  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 18:53:36.548215  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548223  976444 command_runner.go:130] >       "size": "53621675",
	I0314 18:53:36.548236  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548244  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548253  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548260  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548268  976444 command_runner.go:130] >     },
	I0314 18:53:36.548275  976444 command_runner.go:130] >     {
	I0314 18:53:36.548288  976444 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 18:53:36.548297  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548306  976444 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 18:53:36.548314  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548323  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548338  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 18:53:36.548353  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 18:53:36.548361  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548369  976444 command_runner.go:130] >       "size": "295456551",
	I0314 18:53:36.548378  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548386  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548394  976444 command_runner.go:130] >       },
	I0314 18:53:36.548402  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548412  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548419  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548428  976444 command_runner.go:130] >     },
	I0314 18:53:36.548435  976444 command_runner.go:130] >     {
	I0314 18:53:36.548449  976444 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 18:53:36.548459  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548484  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 18:53:36.548494  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548500  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548514  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 18:53:36.548529  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 18:53:36.548539  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548546  976444 command_runner.go:130] >       "size": "127226832",
	I0314 18:53:36.548556  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548565  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548572  976444 command_runner.go:130] >       },
	I0314 18:53:36.548581  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548587  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548595  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548600  976444 command_runner.go:130] >     },
	I0314 18:53:36.548607  976444 command_runner.go:130] >     {
	I0314 18:53:36.548620  976444 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 18:53:36.548631  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548642  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 18:53:36.548649  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548658  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548692  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 18:53:36.548709  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 18:53:36.548716  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548723  976444 command_runner.go:130] >       "size": "123261750",
	I0314 18:53:36.548733  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548740  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548749  976444 command_runner.go:130] >       },
	I0314 18:53:36.548756  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548766  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548773  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548783  976444 command_runner.go:130] >     },
	I0314 18:53:36.548790  976444 command_runner.go:130] >     {
	I0314 18:53:36.548802  976444 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 18:53:36.548812  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548818  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 18:53:36.548823  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548834  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548844  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 18:53:36.548854  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 18:53:36.548858  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548865  976444 command_runner.go:130] >       "size": "74749335",
	I0314 18:53:36.548872  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.548877  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.548883  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.548889  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.548895  976444 command_runner.go:130] >     },
	I0314 18:53:36.548900  976444 command_runner.go:130] >     {
	I0314 18:53:36.548909  976444 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 18:53:36.548915  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.548922  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 18:53:36.548928  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548934  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.548944  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 18:53:36.548957  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 18:53:36.548966  976444 command_runner.go:130] >       ],
	I0314 18:53:36.548974  976444 command_runner.go:130] >       "size": "61551410",
	I0314 18:53:36.548983  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.548991  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.548999  976444 command_runner.go:130] >       },
	I0314 18:53:36.549006  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.549016  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.549023  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.549032  976444 command_runner.go:130] >     },
	I0314 18:53:36.549038  976444 command_runner.go:130] >     {
	I0314 18:53:36.549051  976444 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 18:53:36.549061  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.549072  976444 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 18:53:36.549082  976444 command_runner.go:130] >       ],
	I0314 18:53:36.549089  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.549104  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 18:53:36.549118  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 18:53:36.549127  976444 command_runner.go:130] >       ],
	I0314 18:53:36.549142  976444 command_runner.go:130] >       "size": "750414",
	I0314 18:53:36.549152  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.549159  976444 command_runner.go:130] >         "value": "65535"
	I0314 18:53:36.549167  976444 command_runner.go:130] >       },
	I0314 18:53:36.549175  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.549184  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.549191  976444 command_runner.go:130] >       "pinned": true
	I0314 18:53:36.549197  976444 command_runner.go:130] >     }
	I0314 18:53:36.549203  976444 command_runner.go:130] >   ]
	I0314 18:53:36.549210  976444 command_runner.go:130] > }
	I0314 18:53:36.549411  976444 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:53:36.549424  976444 crio.go:415] Images already preloaded, skipping extraction
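The JSON block above is the raw output of `sudo crictl images --output json` that minikube inspects at this step. Below is a minimal Go sketch of how such a payload could be parsed to confirm the preload; it is an illustration only, not minikube's actual implementation, and the expected tags are simply taken from the repoTags shown in the listing.

// Illustrative sketch (assumption, not minikube's code): parse the
// `crictl images --output json` payload and report whether a few of the
// preloaded tags from the log above are present.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

func main() {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		panic(err)
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		panic(err)
	}
	have := map[string]bool{}
	for _, img := range list.Images {
		for _, tag := range img.RepoTags {
			have[tag] = true
		}
	}
	// Tags taken verbatim from the repoTags in the listing above.
	for _, tag := range []string{
		"registry.k8s.io/kube-apiserver:v1.28.4",
		"registry.k8s.io/etcd:3.5.9-0",
		"registry.k8s.io/pause:3.9",
	} {
		fmt.Printf("%s preloaded: %v\n", tag, have[tag])
	}
}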
	I0314 18:53:36.549476  976444 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 18:53:36.585686  976444 command_runner.go:130] > {
	I0314 18:53:36.585709  976444 command_runner.go:130] >   "images": [
	I0314 18:53:36.585713  976444 command_runner.go:130] >     {
	I0314 18:53:36.585721  976444 command_runner.go:130] >       "id": "c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc",
	I0314 18:53:36.585726  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585732  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20230809-80a64d96"
	I0314 18:53:36.585735  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585746  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585759  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052",
	I0314 18:53:36.585776  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"
	I0314 18:53:36.585781  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585785  976444 command_runner.go:130] >       "size": "65258016",
	I0314 18:53:36.585791  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.585795  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.585804  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.585811  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.585814  976444 command_runner.go:130] >     },
	I0314 18:53:36.585818  976444 command_runner.go:130] >     {
	I0314 18:53:36.585830  976444 command_runner.go:130] >       "id": "4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5",
	I0314 18:53:36.585838  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585846  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd:v20240202-8f1494ea"
	I0314 18:53:36.585855  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585861  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585873  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988",
	I0314 18:53:36.585886  976444 command_runner.go:130] >         "docker.io/kindest/kindnetd@sha256:bdddbe20c61d325166b48dd517059f5b93c21526eb74c5c80d86cd6d37236bac"
	I0314 18:53:36.585895  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585900  976444 command_runner.go:130] >       "size": "65291810",
	I0314 18:53:36.585910  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.585922  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.585930  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.585938  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.585944  976444 command_runner.go:130] >     },
	I0314 18:53:36.585948  976444 command_runner.go:130] >     {
	I0314 18:53:36.585956  976444 command_runner.go:130] >       "id": "8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a",
	I0314 18:53:36.585961  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.585967  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox:1.28"
	I0314 18:53:36.585973  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585976  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.585983  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335",
	I0314 18:53:36.585991  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/busybox@sha256:9afb80db71730dbb303fe00765cbf34bddbdc6b66e49897fc2e1861967584b12"
	I0314 18:53:36.585994  976444 command_runner.go:130] >       ],
	I0314 18:53:36.585998  976444 command_runner.go:130] >       "size": "1363676",
	I0314 18:53:36.586004  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586008  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586018  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586024  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586027  976444 command_runner.go:130] >     },
	I0314 18:53:36.586031  976444 command_runner.go:130] >     {
	I0314 18:53:36.586037  976444 command_runner.go:130] >       "id": "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562",
	I0314 18:53:36.586041  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586047  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner:v5"
	I0314 18:53:36.586053  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586057  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586064  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944",
	I0314 18:53:36.586089  976444 command_runner.go:130] >         "gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"
	I0314 18:53:36.586095  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586099  976444 command_runner.go:130] >       "size": "31470524",
	I0314 18:53:36.586103  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586107  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586111  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586115  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586118  976444 command_runner.go:130] >     },
	I0314 18:53:36.586122  976444 command_runner.go:130] >     {
	I0314 18:53:36.586128  976444 command_runner.go:130] >       "id": "ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc",
	I0314 18:53:36.586135  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586140  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns:v1.10.1"
	I0314 18:53:36.586144  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586149  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586158  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e",
	I0314 18:53:36.586166  976444 command_runner.go:130] >         "registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"
	I0314 18:53:36.586171  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586175  976444 command_runner.go:130] >       "size": "53621675",
	I0314 18:53:36.586179  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586182  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586186  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586190  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586193  976444 command_runner.go:130] >     },
	I0314 18:53:36.586197  976444 command_runner.go:130] >     {
	I0314 18:53:36.586203  976444 command_runner.go:130] >       "id": "73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9",
	I0314 18:53:36.586209  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586214  976444 command_runner.go:130] >         "registry.k8s.io/etcd:3.5.9-0"
	I0314 18:53:36.586220  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586224  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586230  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15",
	I0314 18:53:36.586239  976444 command_runner.go:130] >         "registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"
	I0314 18:53:36.586243  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586247  976444 command_runner.go:130] >       "size": "295456551",
	I0314 18:53:36.586250  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586254  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586257  976444 command_runner.go:130] >       },
	I0314 18:53:36.586266  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586273  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586276  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586279  976444 command_runner.go:130] >     },
	I0314 18:53:36.586282  976444 command_runner.go:130] >     {
	I0314 18:53:36.586288  976444 command_runner.go:130] >       "id": "7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257",
	I0314 18:53:36.586293  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586298  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver:v1.28.4"
	I0314 18:53:36.586304  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586308  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586315  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499",
	I0314 18:53:36.586324  976444 command_runner.go:130] >         "registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"
	I0314 18:53:36.586327  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586331  976444 command_runner.go:130] >       "size": "127226832",
	I0314 18:53:36.586336  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586340  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586345  976444 command_runner.go:130] >       },
	I0314 18:53:36.586349  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586355  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586359  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586365  976444 command_runner.go:130] >     },
	I0314 18:53:36.586369  976444 command_runner.go:130] >     {
	I0314 18:53:36.586377  976444 command_runner.go:130] >       "id": "d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591",
	I0314 18:53:36.586383  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586388  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager:v1.28.4"
	I0314 18:53:36.586394  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586398  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586420  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c",
	I0314 18:53:36.586433  976444 command_runner.go:130] >         "registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"
	I0314 18:53:36.586441  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586448  976444 command_runner.go:130] >       "size": "123261750",
	I0314 18:53:36.586452  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586458  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586462  976444 command_runner.go:130] >       },
	I0314 18:53:36.586468  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586472  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586482  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586488  976444 command_runner.go:130] >     },
	I0314 18:53:36.586492  976444 command_runner.go:130] >     {
	I0314 18:53:36.586497  976444 command_runner.go:130] >       "id": "83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e",
	I0314 18:53:36.586503  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586508  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy:v1.28.4"
	I0314 18:53:36.586514  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586518  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586528  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e",
	I0314 18:53:36.586537  976444 command_runner.go:130] >         "registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"
	I0314 18:53:36.586542  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586547  976444 command_runner.go:130] >       "size": "74749335",
	I0314 18:53:36.586552  976444 command_runner.go:130] >       "uid": null,
	I0314 18:53:36.586557  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586563  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586566  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586572  976444 command_runner.go:130] >     },
	I0314 18:53:36.586575  976444 command_runner.go:130] >     {
	I0314 18:53:36.586581  976444 command_runner.go:130] >       "id": "e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1",
	I0314 18:53:36.586588  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586592  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler:v1.28.4"
	I0314 18:53:36.586598  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586602  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586612  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba",
	I0314 18:53:36.586622  976444 command_runner.go:130] >         "registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"
	I0314 18:53:36.586627  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586631  976444 command_runner.go:130] >       "size": "61551410",
	I0314 18:53:36.586638  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586641  976444 command_runner.go:130] >         "value": "0"
	I0314 18:53:36.586647  976444 command_runner.go:130] >       },
	I0314 18:53:36.586651  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586657  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586661  976444 command_runner.go:130] >       "pinned": false
	I0314 18:53:36.586665  976444 command_runner.go:130] >     },
	I0314 18:53:36.586671  976444 command_runner.go:130] >     {
	I0314 18:53:36.586676  976444 command_runner.go:130] >       "id": "e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c",
	I0314 18:53:36.586688  976444 command_runner.go:130] >       "repoTags": [
	I0314 18:53:36.586706  976444 command_runner.go:130] >         "registry.k8s.io/pause:3.9"
	I0314 18:53:36.586711  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586715  976444 command_runner.go:130] >       "repoDigests": [
	I0314 18:53:36.586724  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097",
	I0314 18:53:36.586733  976444 command_runner.go:130] >         "registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"
	I0314 18:53:36.586741  976444 command_runner.go:130] >       ],
	I0314 18:53:36.586747  976444 command_runner.go:130] >       "size": "750414",
	I0314 18:53:36.586750  976444 command_runner.go:130] >       "uid": {
	I0314 18:53:36.586754  976444 command_runner.go:130] >         "value": "65535"
	I0314 18:53:36.586760  976444 command_runner.go:130] >       },
	I0314 18:53:36.586764  976444 command_runner.go:130] >       "username": "",
	I0314 18:53:36.586770  976444 command_runner.go:130] >       "spec": null,
	I0314 18:53:36.586774  976444 command_runner.go:130] >       "pinned": true
	I0314 18:53:36.586780  976444 command_runner.go:130] >     }
	I0314 18:53:36.586783  976444 command_runner.go:130] >   ]
	I0314 18:53:36.586789  976444 command_runner.go:130] > }
	I0314 18:53:36.586951  976444 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 18:53:36.586964  976444 cache_images.go:84] Images are preloaded, skipping loading
	I0314 18:53:36.586972  976444 kubeadm.go:928] updating node { 192.168.39.68 8443 v1.28.4 crio true true} ...
	I0314 18:53:36.587066  976444 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=multinode-669543 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.68
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
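The [Unit]/[Service] fragment above appears to be the kubelet systemd drop-in that minikube renders for this node. The short Go sketch below only illustrates how the logged ExecStart line can be assembled from the per-node values that appear in the log (version v1.28.4, hostname multinode-669543, node IP 192.168.39.68); it is an assumption for illustration, not minikube's kubeadm.go.

// Illustrative sketch (assumption): rebuild the kubelet ExecStart line shown
// in the log above from its per-node parameters.
package main

import "fmt"

func kubeletExecStart(version, nodeName, nodeIP string) string {
	return fmt.Sprintf(
		"/var/lib/minikube/binaries/%s/kubelet"+
			" --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf"+
			" --config=/var/lib/kubelet/config.yaml"+
			" --hostname-override=%s"+
			" --kubeconfig=/etc/kubernetes/kubelet.conf"+
			" --node-ip=%s",
		version, nodeName, nodeIP)
}

func main() {
	fmt.Println(kubeletExecStart("v1.28.4", "multinode-669543", "192.168.39.68"))
}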
	I0314 18:53:36.587125  976444 ssh_runner.go:195] Run: crio config
	I0314 18:53:36.630942  976444 command_runner.go:130] > # The CRI-O configuration file specifies all of the available configuration
	I0314 18:53:36.630980  976444 command_runner.go:130] > # options and command-line flags for the crio(8) OCI Kubernetes Container Runtime
	I0314 18:53:36.630991  976444 command_runner.go:130] > # daemon, but in a TOML format that can be more easily modified and versioned.
	I0314 18:53:36.630996  976444 command_runner.go:130] > #
	I0314 18:53:36.631006  976444 command_runner.go:130] > # Please refer to crio.conf(5) for details of all configuration options.
	I0314 18:53:36.631016  976444 command_runner.go:130] > # CRI-O supports partial configuration reload during runtime, which can be
	I0314 18:53:36.631025  976444 command_runner.go:130] > # done by sending SIGHUP to the running process. Currently supported options
	I0314 18:53:36.631035  976444 command_runner.go:130] > # are explicitly mentioned with: 'This option supports live configuration
	I0314 18:53:36.631045  976444 command_runner.go:130] > # reload'.
	I0314 18:53:36.631054  976444 command_runner.go:130] > # CRI-O reads its storage defaults from the containers-storage.conf(5) file
	I0314 18:53:36.631067  976444 command_runner.go:130] > # located at /etc/containers/storage.conf. Modify this storage configuration if
	I0314 18:53:36.631076  976444 command_runner.go:130] > # you want to change the system's defaults. If you want to modify storage just
	I0314 18:53:36.631089  976444 command_runner.go:130] > # for CRI-O, you can change the storage configuration options here.
	I0314 18:53:36.631094  976444 command_runner.go:130] > [crio]
	I0314 18:53:36.631103  976444 command_runner.go:130] > # Path to the "root directory". CRI-O stores all of its data, including
	I0314 18:53:36.631114  976444 command_runner.go:130] > # containers images, in this directory.
	I0314 18:53:36.631227  976444 command_runner.go:130] > root = "/var/lib/containers/storage"
	I0314 18:53:36.631270  976444 command_runner.go:130] > # Path to the "run directory". CRI-O stores all of its state in this directory.
	I0314 18:53:36.631422  976444 command_runner.go:130] > runroot = "/var/run/containers/storage"
	I0314 18:53:36.631444  976444 command_runner.go:130] > # Path to the "imagestore". If CRI-O stores all of its images in this directory differently than Root.
	I0314 18:53:36.631717  976444 command_runner.go:130] > # imagestore = ""
	I0314 18:53:36.631730  976444 command_runner.go:130] > # Storage driver used to manage the storage of images and containers. Please
	I0314 18:53:36.631739  976444 command_runner.go:130] > # refer to containers-storage.conf(5) to see all available storage drivers.
	I0314 18:53:36.632204  976444 command_runner.go:130] > storage_driver = "overlay"
	I0314 18:53:36.632243  976444 command_runner.go:130] > # List to pass options to the storage driver. Please refer to
	I0314 18:53:36.632253  976444 command_runner.go:130] > # containers-storage.conf(5) to see all available storage options.
	I0314 18:53:36.632261  976444 command_runner.go:130] > storage_option = [
	I0314 18:53:36.632407  976444 command_runner.go:130] > 	"overlay.mountopt=nodev,metacopy=on",
	I0314 18:53:36.632524  976444 command_runner.go:130] > ]
	I0314 18:53:36.632542  976444 command_runner.go:130] > # The default log directory where all logs will go unless directly specified by
	I0314 18:53:36.632552  976444 command_runner.go:130] > # the kubelet. The log directory specified must be an absolute directory.
	I0314 18:53:36.632945  976444 command_runner.go:130] > # log_dir = "/var/log/crio/pods"
	I0314 18:53:36.632959  976444 command_runner.go:130] > # Location for CRI-O to lay down the temporary version file.
	I0314 18:53:36.632967  976444 command_runner.go:130] > # It is used to check if crio wipe should wipe containers, which should
	I0314 18:53:36.632975  976444 command_runner.go:130] > # always happen on a node reboot
	I0314 18:53:36.633391  976444 command_runner.go:130] > # version_file = "/var/run/crio/version"
	I0314 18:53:36.633413  976444 command_runner.go:130] > # Location for CRI-O to lay down the persistent version file.
	I0314 18:53:36.633424  976444 command_runner.go:130] > # It is used to check if crio wipe should wipe images, which should
	I0314 18:53:36.633436  976444 command_runner.go:130] > # only happen when CRI-O has been upgraded
	I0314 18:53:36.633447  976444 command_runner.go:130] > version_file_persist = "/var/lib/crio/version"
	I0314 18:53:36.633463  976444 command_runner.go:130] > # InternalWipe is whether CRI-O should wipe containers and images after a reboot when the server starts.
	I0314 18:53:36.633480  976444 command_runner.go:130] > # If set to false, one must use the external command 'crio wipe' to wipe the containers and images in these situations.
	I0314 18:53:36.633796  976444 command_runner.go:130] > # internal_wipe = true
	I0314 18:53:36.633813  976444 command_runner.go:130] > # InternalRepair is whether CRI-O should check if the container and image storage was corrupted after a sudden restart.
	I0314 18:53:36.633821  976444 command_runner.go:130] > # If it was, CRI-O also attempts to repair the storage.
	I0314 18:53:36.634280  976444 command_runner.go:130] > # internal_repair = false
	I0314 18:53:36.634294  976444 command_runner.go:130] > # Location for CRI-O to lay down the clean shutdown file.
	I0314 18:53:36.634304  976444 command_runner.go:130] > # It is used to check whether crio had time to sync before shutting down.
	I0314 18:53:36.634313  976444 command_runner.go:130] > # If not found, crio wipe will clear the storage directory.
	I0314 18:53:36.634547  976444 command_runner.go:130] > # clean_shutdown_file = "/var/lib/crio/clean.shutdown"
	I0314 18:53:36.634565  976444 command_runner.go:130] > # The crio.api table contains settings for the kubelet/gRPC interface.
	I0314 18:53:36.634571  976444 command_runner.go:130] > [crio.api]
	I0314 18:53:36.634581  976444 command_runner.go:130] > # Path to AF_LOCAL socket on which CRI-O will listen.
	I0314 18:53:36.634878  976444 command_runner.go:130] > # listen = "/var/run/crio/crio.sock"
	I0314 18:53:36.634889  976444 command_runner.go:130] > # IP address on which the stream server will listen.
	I0314 18:53:36.635265  976444 command_runner.go:130] > # stream_address = "127.0.0.1"
	I0314 18:53:36.635291  976444 command_runner.go:130] > # The port on which the stream server will listen. If the port is set to "0", then
	I0314 18:53:36.635301  976444 command_runner.go:130] > # CRI-O will allocate a random free port number.
	I0314 18:53:36.635507  976444 command_runner.go:130] > # stream_port = "0"
	I0314 18:53:36.635520  976444 command_runner.go:130] > # Enable encrypted TLS transport of the stream server.
	I0314 18:53:36.635885  976444 command_runner.go:130] > # stream_enable_tls = false
	I0314 18:53:36.635909  976444 command_runner.go:130] > # Length of time until open streams terminate due to lack of activity
	I0314 18:53:36.636433  976444 command_runner.go:130] > # stream_idle_timeout = ""
	I0314 18:53:36.636454  976444 command_runner.go:130] > # Path to the x509 certificate file used to serve the encrypted stream. This
	I0314 18:53:36.636464  976444 command_runner.go:130] > # file can change, and CRI-O will automatically pick up the changes within 5
	I0314 18:53:36.636471  976444 command_runner.go:130] > # minutes.
	I0314 18:53:36.636646  976444 command_runner.go:130] > # stream_tls_cert = ""
	I0314 18:53:36.636667  976444 command_runner.go:130] > # Path to the key file used to serve the encrypted stream. This file can
	I0314 18:53:36.636677  976444 command_runner.go:130] > # change and CRI-O will automatically pick up the changes within 5 minutes.
	I0314 18:53:36.636911  976444 command_runner.go:130] > # stream_tls_key = ""
	I0314 18:53:36.636928  976444 command_runner.go:130] > # Path to the x509 CA(s) file used to verify and authenticate client
	I0314 18:53:36.636938  976444 command_runner.go:130] > # communication with the encrypted stream. This file can change and CRI-O will
	I0314 18:53:36.636972  976444 command_runner.go:130] > # automatically pick up the changes within 5 minutes.
	I0314 18:53:36.637279  976444 command_runner.go:130] > # stream_tls_ca = ""
	I0314 18:53:36.637297  976444 command_runner.go:130] > # Maximum grpc send message size in bytes. If not set or <=0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 18:53:36.637435  976444 command_runner.go:130] > grpc_max_send_msg_size = 16777216
	I0314 18:53:36.637456  976444 command_runner.go:130] > # Maximum grpc receive message size. If not set or <= 0, then CRI-O will default to 80 * 1024 * 1024.
	I0314 18:53:36.637611  976444 command_runner.go:130] > grpc_max_recv_msg_size = 16777216
	I0314 18:53:36.637627  976444 command_runner.go:130] > # The crio.runtime table contains settings pertaining to the OCI runtime used
	I0314 18:53:36.637636  976444 command_runner.go:130] > # and options for how to set up and manage the OCI runtime.
	I0314 18:53:36.637643  976444 command_runner.go:130] > [crio.runtime]
	I0314 18:53:36.637663  976444 command_runner.go:130] > # A list of ulimits to be set in containers by default, specified as
	I0314 18:53:36.637677  976444 command_runner.go:130] > # "<ulimit name>=<soft limit>:<hard limit>", for example:
	I0314 18:53:36.637688  976444 command_runner.go:130] > # "nofile=1024:2048"
	I0314 18:53:36.637698  976444 command_runner.go:130] > # If nothing is set here, settings will be inherited from the CRI-O daemon
	I0314 18:53:36.637763  976444 command_runner.go:130] > # default_ulimits = [
	I0314 18:53:36.637947  976444 command_runner.go:130] > # ]
	I0314 18:53:36.637963  976444 command_runner.go:130] > # If true, the runtime will not use pivot_root, but instead use MS_MOVE.
	I0314 18:53:36.638301  976444 command_runner.go:130] > # no_pivot = false
	I0314 18:53:36.638322  976444 command_runner.go:130] > # decryption_keys_path is the path where the keys required for
	I0314 18:53:36.638332  976444 command_runner.go:130] > # image decryption are stored. This option supports live configuration reload.
	I0314 18:53:36.638345  976444 command_runner.go:130] > # decryption_keys_path = "/etc/crio/keys/"
	I0314 18:53:36.638354  976444 command_runner.go:130] > # Path to the conmon binary, used for monitoring the OCI runtime.
	I0314 18:53:36.638369  976444 command_runner.go:130] > # Will be searched for using $PATH if empty.
	I0314 18:53:36.638384  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 18:53:36.638402  976444 command_runner.go:130] > conmon = "/usr/libexec/crio/conmon"
	I0314 18:53:36.638412  976444 command_runner.go:130] > # Cgroup setting for conmon
	I0314 18:53:36.638426  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorCgroup.
	I0314 18:53:36.638445  976444 command_runner.go:130] > conmon_cgroup = "pod"
	I0314 18:53:36.638458  976444 command_runner.go:130] > # Environment variable list for the conmon process, used for passing necessary
	I0314 18:53:36.638468  976444 command_runner.go:130] > # environment variables to conmon or the runtime.
	I0314 18:53:36.638478  976444 command_runner.go:130] > # This option is currently deprecated, and will be replaced with RuntimeHandler.MonitorEnv.
	I0314 18:53:36.638488  976444 command_runner.go:130] > conmon_env = [
	I0314 18:53:36.638501  976444 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 18:53:36.638510  976444 command_runner.go:130] > ]
	I0314 18:53:36.638518  976444 command_runner.go:130] > # Additional environment variables to set for all the
	I0314 18:53:36.638530  976444 command_runner.go:130] > # containers. These are overridden if set in the
	I0314 18:53:36.638539  976444 command_runner.go:130] > # container image spec or in the container runtime configuration.
	I0314 18:53:36.638546  976444 command_runner.go:130] > # default_env = [
	I0314 18:53:36.638550  976444 command_runner.go:130] > # ]
	I0314 18:53:36.638560  976444 command_runner.go:130] > # If true, SELinux will be used for pod separation on the host.
	I0314 18:53:36.638579  976444 command_runner.go:130] > # This option is deprecated, and be interpreted from whether SELinux is enabled on the host in the future.
	I0314 18:53:36.638588  976444 command_runner.go:130] > # selinux = false
	I0314 18:53:36.638598  976444 command_runner.go:130] > # Path to the seccomp.json profile which is used as the default seccomp profile
	I0314 18:53:36.638607  976444 command_runner.go:130] > # for the runtime. If not specified, then the internal default seccomp profile
	I0314 18:53:36.638620  976444 command_runner.go:130] > # will be used. This option supports live configuration reload.
	I0314 18:53:36.638625  976444 command_runner.go:130] > # seccomp_profile = ""
	I0314 18:53:36.638636  976444 command_runner.go:130] > # Changes the meaning of an empty seccomp profile. By default
	I0314 18:53:36.638645  976444 command_runner.go:130] > # (and according to CRI spec), an empty profile means unconfined.
	I0314 18:53:36.638657  976444 command_runner.go:130] > # This option tells CRI-O to treat an empty profile as the default profile,
	I0314 18:53:36.638666  976444 command_runner.go:130] > # which might increase security.
	I0314 18:53:36.638673  976444 command_runner.go:130] > # This option is currently deprecated,
	I0314 18:53:36.638686  976444 command_runner.go:130] > # and will be replaced by the SeccompDefault FeatureGate in Kubernetes.
	I0314 18:53:36.638694  976444 command_runner.go:130] > seccomp_use_default_when_empty = false
	I0314 18:53:36.638717  976444 command_runner.go:130] > # Used to change the name of the default AppArmor profile of CRI-O. The default
	I0314 18:53:36.638730  976444 command_runner.go:130] > # profile name is "crio-default". This profile only takes effect if the user
	I0314 18:53:36.638743  976444 command_runner.go:130] > # does not specify a profile via the Kubernetes Pod's metadata annotation. If
	I0314 18:53:36.638753  976444 command_runner.go:130] > # the profile is set to "unconfined", then this equals to disabling AppArmor.
	I0314 18:53:36.638764  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.638774  976444 command_runner.go:130] > # apparmor_profile = "crio-default"
	I0314 18:53:36.638789  976444 command_runner.go:130] > # Path to the blockio class configuration file for configuring
	I0314 18:53:36.638799  976444 command_runner.go:130] > # the cgroup blockio controller.
	I0314 18:53:36.638806  976444 command_runner.go:130] > # blockio_config_file = ""
	I0314 18:53:36.638820  976444 command_runner.go:130] > # Reload blockio-config-file and rescan blockio devices in the system before applying
	I0314 18:53:36.638839  976444 command_runner.go:130] > # blockio parameters.
	I0314 18:53:36.638849  976444 command_runner.go:130] > # blockio_reload = false
	I0314 18:53:36.638859  976444 command_runner.go:130] > # Used to change irqbalance service config file path which is used for configuring
	I0314 18:53:36.638868  976444 command_runner.go:130] > # irqbalance daemon.
	I0314 18:53:36.638876  976444 command_runner.go:130] > # irqbalance_config_file = "/etc/sysconfig/irqbalance"
	I0314 18:53:36.638888  976444 command_runner.go:130] > # irqbalance_config_restore_file allows to set a cpu mask CRI-O should
	I0314 18:53:36.638898  976444 command_runner.go:130] > # restore as irqbalance config at startup. Set to empty string to disable this flow entirely.
	I0314 18:53:36.638911  976444 command_runner.go:130] > # By default, CRI-O manages the irqbalance configuration to enable dynamic IRQ pinning.
	I0314 18:53:36.638924  976444 command_runner.go:130] > # irqbalance_config_restore_file = "/etc/sysconfig/orig_irq_banned_cpus"
	I0314 18:53:36.638934  976444 command_runner.go:130] > # Path to the RDT configuration file for configuring the resctrl pseudo-filesystem.
	I0314 18:53:36.638942  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.638948  976444 command_runner.go:130] > # rdt_config_file = ""
	I0314 18:53:36.638956  976444 command_runner.go:130] > # Cgroup management implementation used for the runtime.
	I0314 18:53:36.638964  976444 command_runner.go:130] > cgroup_manager = "cgroupfs"
	I0314 18:53:36.639002  976444 command_runner.go:130] > # Specify whether the image pull must be performed in a separate cgroup.
	I0314 18:53:36.639011  976444 command_runner.go:130] > # separate_pull_cgroup = ""
	I0314 18:53:36.639021  976444 command_runner.go:130] > # List of default capabilities for containers. If it is empty or commented out,
	I0314 18:53:36.639034  976444 command_runner.go:130] > # only the capabilities defined in the containers json file by the user/kube
	I0314 18:53:36.639040  976444 command_runner.go:130] > # will be added.
	I0314 18:53:36.639050  976444 command_runner.go:130] > # default_capabilities = [
	I0314 18:53:36.639055  976444 command_runner.go:130] > # 	"CHOWN",
	I0314 18:53:36.639064  976444 command_runner.go:130] > # 	"DAC_OVERRIDE",
	I0314 18:53:36.639070  976444 command_runner.go:130] > # 	"FSETID",
	I0314 18:53:36.639079  976444 command_runner.go:130] > # 	"FOWNER",
	I0314 18:53:36.639085  976444 command_runner.go:130] > # 	"SETGID",
	I0314 18:53:36.639094  976444 command_runner.go:130] > # 	"SETUID",
	I0314 18:53:36.639100  976444 command_runner.go:130] > # 	"SETPCAP",
	I0314 18:53:36.639105  976444 command_runner.go:130] > # 	"NET_BIND_SERVICE",
	I0314 18:53:36.639110  976444 command_runner.go:130] > # 	"KILL",
	I0314 18:53:36.639115  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639125  976444 command_runner.go:130] > # Add capabilities to the inheritable set, as well as the default group of permitted, bounding and effective.
	I0314 18:53:36.639143  976444 command_runner.go:130] > # If capabilities are expected to work for non-root users, this option should be set.
	I0314 18:53:36.639151  976444 command_runner.go:130] > # add_inheritable_capabilities = false
	I0314 18:53:36.639159  976444 command_runner.go:130] > # List of default sysctls. If it is empty or commented out, only the sysctls
	I0314 18:53:36.639172  976444 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 18:53:36.639179  976444 command_runner.go:130] > # default_sysctls = [
	I0314 18:53:36.639194  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639203  976444 command_runner.go:130] > # List of devices on the host that a
	I0314 18:53:36.639212  976444 command_runner.go:130] > # user can specify with the "io.kubernetes.cri-o.Devices" allowed annotation.
	I0314 18:53:36.639221  976444 command_runner.go:130] > # allowed_devices = [
	I0314 18:53:36.639227  976444 command_runner.go:130] > # 	"/dev/fuse",
	I0314 18:53:36.639233  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639240  976444 command_runner.go:130] > # List of additional devices. specified as
	I0314 18:53:36.639254  976444 command_runner.go:130] > # "<device-on-host>:<device-on-container>:<permissions>", for example: "--device=/dev/sdc:/dev/xvdc:rwm".
	I0314 18:53:36.639266  976444 command_runner.go:130] > # If it is empty or commented out, only the devices
	I0314 18:53:36.639274  976444 command_runner.go:130] > # defined in the container json file by the user/kube will be added.
	I0314 18:53:36.639284  976444 command_runner.go:130] > # additional_devices = [
	I0314 18:53:36.639289  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639299  976444 command_runner.go:130] > # List of directories to scan for CDI Spec files.
	I0314 18:53:36.639305  976444 command_runner.go:130] > # cdi_spec_dirs = [
	I0314 18:53:36.639314  976444 command_runner.go:130] > # 	"/etc/cdi",
	I0314 18:53:36.639320  976444 command_runner.go:130] > # 	"/var/run/cdi",
	I0314 18:53:36.639326  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639336  976444 command_runner.go:130] > # Change the default behavior of setting container devices uid/gid from CRI's
	I0314 18:53:36.639348  976444 command_runner.go:130] > # SecurityContext (RunAsUser/RunAsGroup) instead of taking host's uid/gid.
	I0314 18:53:36.639354  976444 command_runner.go:130] > # Defaults to false.
	I0314 18:53:36.639365  976444 command_runner.go:130] > # device_ownership_from_security_context = false
	I0314 18:53:36.639374  976444 command_runner.go:130] > # Path to OCI hooks directories for automatically executed hooks. If one of the
	I0314 18:53:36.639386  976444 command_runner.go:130] > # directories does not exist, then CRI-O will automatically skip them.
	I0314 18:53:36.639392  976444 command_runner.go:130] > # hooks_dir = [
	I0314 18:53:36.639402  976444 command_runner.go:130] > # 	"/usr/share/containers/oci/hooks.d",
	I0314 18:53:36.639408  976444 command_runner.go:130] > # ]
	I0314 18:53:36.639416  976444 command_runner.go:130] > # Path to the file specifying the defaults mounts for each container. The
	I0314 18:53:36.639429  976444 command_runner.go:130] > # format of the config is /SRC:/DST, one mount per line. Notice that CRI-O reads
	I0314 18:53:36.639437  976444 command_runner.go:130] > # its default mounts from the following two files:
	I0314 18:53:36.639442  976444 command_runner.go:130] > #
	I0314 18:53:36.639451  976444 command_runner.go:130] > #   1) /etc/containers/mounts.conf (i.e., default_mounts_file): This is the
	I0314 18:53:36.639465  976444 command_runner.go:130] > #      override file, where users can either add in their own default mounts, or
	I0314 18:53:36.639477  976444 command_runner.go:130] > #      override the default mounts shipped with the package.
	I0314 18:53:36.639483  976444 command_runner.go:130] > #
	I0314 18:53:36.639490  976444 command_runner.go:130] > #   2) /usr/share/containers/mounts.conf: This is the default file read for
	I0314 18:53:36.639504  976444 command_runner.go:130] > #      mounts. If you want CRI-O to read from a different, specific mounts file,
	I0314 18:53:36.639523  976444 command_runner.go:130] > #      you can change the default_mounts_file. Note, if this is done, CRI-O will
	I0314 18:53:36.639538  976444 command_runner.go:130] > #      only add mounts it finds in this file.
	I0314 18:53:36.639542  976444 command_runner.go:130] > #
	I0314 18:53:36.639549  976444 command_runner.go:130] > # default_mounts_file = ""
	I0314 18:53:36.639558  976444 command_runner.go:130] > # Maximum number of processes allowed in a container.
	I0314 18:53:36.639568  976444 command_runner.go:130] > # This option is deprecated. The Kubelet flag '--pod-pids-limit' should be used instead.
	I0314 18:53:36.639577  976444 command_runner.go:130] > pids_limit = 1024
	I0314 18:53:36.639587  976444 command_runner.go:130] > # Maximum sized allowed for the container log file. Negative numbers indicate
	I0314 18:53:36.639597  976444 command_runner.go:130] > # that no size limit is imposed. If it is positive, it must be >= 8192 to
	I0314 18:53:36.639605  976444 command_runner.go:130] > # match/exceed conmon's read buffer. The file is truncated and re-opened so the
	I0314 18:53:36.639617  976444 command_runner.go:130] > # limit is never exceeded. This option is deprecated. The Kubelet flag '--container-log-max-size' should be used instead.
	I0314 18:53:36.639622  976444 command_runner.go:130] > # log_size_max = -1
	I0314 18:53:36.639632  976444 command_runner.go:130] > # Whether container output should be logged to journald in addition to the kubernetes log file
	I0314 18:53:36.639641  976444 command_runner.go:130] > # log_to_journald = false
	I0314 18:53:36.639650  976444 command_runner.go:130] > # Path to directory in which container exit files are written to by conmon.
	I0314 18:53:36.639661  976444 command_runner.go:130] > # container_exits_dir = "/var/run/crio/exits"
	I0314 18:53:36.639668  976444 command_runner.go:130] > # Path to directory for container attach sockets.
	I0314 18:53:36.639679  976444 command_runner.go:130] > # container_attach_socket_dir = "/var/run/crio"
	I0314 18:53:36.639687  976444 command_runner.go:130] > # The prefix to use for the source of the bind mounts.
	I0314 18:53:36.639701  976444 command_runner.go:130] > # bind_mount_prefix = ""
	I0314 18:53:36.639712  976444 command_runner.go:130] > # If set to true, all containers will run in read-only mode.
	I0314 18:53:36.639718  976444 command_runner.go:130] > # read_only = false
	I0314 18:53:36.639730  976444 command_runner.go:130] > # Changes the verbosity of the logs based on the level it is set to. Options
	I0314 18:53:36.639739  976444 command_runner.go:130] > # are fatal, panic, error, warn, info, debug and trace. This option supports
	I0314 18:53:36.639749  976444 command_runner.go:130] > # live configuration reload.
	I0314 18:53:36.639755  976444 command_runner.go:130] > # log_level = "info"
	I0314 18:53:36.639766  976444 command_runner.go:130] > # Filter the log messages by the provided regular expression.
	I0314 18:53:36.639774  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.639780  976444 command_runner.go:130] > # log_filter = ""
	I0314 18:53:36.639789  976444 command_runner.go:130] > # The UID mappings for the user namespace of each container. A range is
	I0314 18:53:36.639804  976444 command_runner.go:130] > # specified in the form containerUID:HostUID:Size. Multiple ranges must be
	I0314 18:53:36.639814  976444 command_runner.go:130] > # separated by comma.
	I0314 18:53:36.639825  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639834  976444 command_runner.go:130] > # uid_mappings = ""
	I0314 18:53:36.639844  976444 command_runner.go:130] > # The GID mappings for the user namespace of each container. A range is
	I0314 18:53:36.639856  976444 command_runner.go:130] > # specified in the form containerGID:HostGID:Size. Multiple ranges must be
	I0314 18:53:36.639870  976444 command_runner.go:130] > # separated by comma.
	I0314 18:53:36.639893  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639902  976444 command_runner.go:130] > # gid_mappings = ""
	I0314 18:53:36.639912  976444 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host UIDs below this value
	I0314 18:53:36.639924  976444 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 18:53:36.639935  976444 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 18:53:36.639949  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.639958  976444 command_runner.go:130] > # minimum_mappable_uid = -1
	I0314 18:53:36.639966  976444 command_runner.go:130] > # If set, CRI-O will reject any attempt to map host GIDs below this value
	I0314 18:53:36.639979  976444 command_runner.go:130] > # into user namespaces.  A negative value indicates that no minimum is set,
	I0314 18:53:36.639990  976444 command_runner.go:130] > # so specifying mappings will only be allowed for pods that run as UID 0.
	I0314 18:53:36.640003  976444 command_runner.go:130] > # This option is deprecated, and will be replaced with Kubernetes user namespace support (KEP-127) in the future.
	I0314 18:53:36.640013  976444 command_runner.go:130] > # minimum_mappable_gid = -1
	I0314 18:53:36.640022  976444 command_runner.go:130] > # The minimal amount of time in seconds to wait before issuing a timeout
	I0314 18:53:36.640034  976444 command_runner.go:130] > # regarding the proper termination of the container. The lowest possible
	I0314 18:53:36.640043  976444 command_runner.go:130] > # value is 30s, whereas lower values are not considered by CRI-O.
	I0314 18:53:36.640048  976444 command_runner.go:130] > # ctr_stop_timeout = 30
	I0314 18:53:36.640059  976444 command_runner.go:130] > # drop_infra_ctr determines whether CRI-O drops the infra container
	I0314 18:53:36.640068  976444 command_runner.go:130] > # when a pod does not have a private PID namespace, and does not use
	I0314 18:53:36.640078  976444 command_runner.go:130] > # a kernel separating runtime (like kata).
	I0314 18:53:36.640086  976444 command_runner.go:130] > # It requires manage_ns_lifecycle to be true.
	I0314 18:53:36.640094  976444 command_runner.go:130] > drop_infra_ctr = false
	I0314 18:53:36.640103  976444 command_runner.go:130] > # infra_ctr_cpuset determines what CPUs will be used to run infra containers.
	I0314 18:53:36.640111  976444 command_runner.go:130] > # You can use linux CPU list format to specify desired CPUs.
	I0314 18:53:36.640121  976444 command_runner.go:130] > # To get better isolation for guaranteed pods, set this parameter to be equal to kubelet reserved-cpus.
	I0314 18:53:36.640131  976444 command_runner.go:130] > # infra_ctr_cpuset = ""
	I0314 18:53:36.640142  976444 command_runner.go:130] > # shared_cpuset  determines the CPU set which is allowed to be shared between guaranteed containers,
	I0314 18:53:36.640154  976444 command_runner.go:130] > # regardless of, and in addition to, the exclusiveness of their CPUs.
	I0314 18:53:36.640163  976444 command_runner.go:130] > # This field is optional and would not be used if not specified.
	I0314 18:53:36.640173  976444 command_runner.go:130] > # You can specify CPUs in the Linux CPU list format.
	I0314 18:53:36.640179  976444 command_runner.go:130] > # shared_cpuset = ""
	I0314 18:53:36.640189  976444 command_runner.go:130] > # The directory where the state of the managed namespaces gets tracked.
	I0314 18:53:36.640196  976444 command_runner.go:130] > # Only used when manage_ns_lifecycle is true.
	I0314 18:53:36.640203  976444 command_runner.go:130] > # namespaces_dir = "/var/run"
	I0314 18:53:36.640231  976444 command_runner.go:130] > # pinns_path is the path to find the pinns binary, which is needed to manage namespace lifecycle
	I0314 18:53:36.640249  976444 command_runner.go:130] > pinns_path = "/usr/bin/pinns"
	I0314 18:53:36.640265  976444 command_runner.go:130] > # Globally enable/disable CRIU support which is necessary to
	I0314 18:53:36.640273  976444 command_runner.go:130] > # checkpoint and restore container or pods (even if CRIU is found in $PATH).
	I0314 18:53:36.640278  976444 command_runner.go:130] > # enable_criu_support = false
	I0314 18:53:36.640286  976444 command_runner.go:130] > # Enable/disable the generation of the container,
	I0314 18:53:36.640312  976444 command_runner.go:130] > # sandbox lifecycle events to be sent to the Kubelet to optimize the PLEG
	I0314 18:53:36.640323  976444 command_runner.go:130] > # enable_pod_events = false
	I0314 18:53:36.640332  976444 command_runner.go:130] > # default_runtime is the _name_ of the OCI runtime to be used as the default.
	I0314 18:53:36.640371  976444 command_runner.go:130] > # The name is matched against the runtimes map below.
	I0314 18:53:36.640378  976444 command_runner.go:130] > # default_runtime = "runc"
	I0314 18:53:36.640385  976444 command_runner.go:130] > # A list of paths that, when absent from the host,
	I0314 18:53:36.640396  976444 command_runner.go:130] > # will cause a container creation to fail (as opposed to the current behavior being created as a directory).
	I0314 18:53:36.640409  976444 command_runner.go:130] > # This option is to protect from source locations whose existence as a directory could jeopardize the health of the node, and whose
	I0314 18:53:36.640416  976444 command_runner.go:130] > # creation as a file is not desired either.
	I0314 18:53:36.640428  976444 command_runner.go:130] > # An example is /etc/hostname, which will cause failures on reboot if it's created as a directory, but often doesn't exist because
	I0314 18:53:36.640435  976444 command_runner.go:130] > # the hostname is being managed dynamically.
	I0314 18:53:36.640442  976444 command_runner.go:130] > # absent_mount_sources_to_reject = [
	I0314 18:53:36.640450  976444 command_runner.go:130] > # ]
	I0314 18:53:36.640461  976444 command_runner.go:130] > # The "crio.runtime.runtimes" table defines a list of OCI compatible runtimes.
	I0314 18:53:36.640472  976444 command_runner.go:130] > # The runtime to use is picked based on the runtime handler provided by the CRI.
	I0314 18:53:36.640481  976444 command_runner.go:130] > # If no runtime handler is provided, the "default_runtime" will be used.
	I0314 18:53:36.640486  976444 command_runner.go:130] > # Each entry in the table should follow the format:
	I0314 18:53:36.640492  976444 command_runner.go:130] > #
	I0314 18:53:36.640496  976444 command_runner.go:130] > # [crio.runtime.runtimes.runtime-handler]
	I0314 18:53:36.640501  976444 command_runner.go:130] > # runtime_path = "/path/to/the/executable"
	I0314 18:53:36.640505  976444 command_runner.go:130] > # runtime_type = "oci"
	I0314 18:53:36.640555  976444 command_runner.go:130] > # runtime_root = "/path/to/the/root"
	I0314 18:53:36.640562  976444 command_runner.go:130] > # monitor_path = "/path/to/container/monitor"
	I0314 18:53:36.640567  976444 command_runner.go:130] > # monitor_cgroup = "/cgroup/path"
	I0314 18:53:36.640574  976444 command_runner.go:130] > # monitor_exec_cgroup = "/cgroup/path"
	I0314 18:53:36.640578  976444 command_runner.go:130] > # monitor_env = []
	I0314 18:53:36.640582  976444 command_runner.go:130] > # privileged_without_host_devices = false
	I0314 18:53:36.640587  976444 command_runner.go:130] > # allowed_annotations = []
	I0314 18:53:36.640592  976444 command_runner.go:130] > # platform_runtime_paths = { "os/arch" = "/path/to/binary" }
	I0314 18:53:36.640595  976444 command_runner.go:130] > # Where:
	I0314 18:53:36.640600  976444 command_runner.go:130] > # - runtime-handler: Name used to identify the runtime.
	I0314 18:53:36.640615  976444 command_runner.go:130] > # - runtime_path (optional, string): Absolute path to the runtime executable in
	I0314 18:53:36.640623  976444 command_runner.go:130] > #   the host filesystem. If omitted, the runtime-handler identifier should match
	I0314 18:53:36.640630  976444 command_runner.go:130] > #   the runtime executable name, and the runtime executable should be placed
	I0314 18:53:36.640636  976444 command_runner.go:130] > #   in $PATH.
	I0314 18:53:36.640641  976444 command_runner.go:130] > # - runtime_type (optional, string): Type of runtime, one of: "oci", "vm". If
	I0314 18:53:36.640648  976444 command_runner.go:130] > #   omitted, an "oci" runtime is assumed.
	I0314 18:53:36.640654  976444 command_runner.go:130] > # - runtime_root (optional, string): Root directory for storage of containers
	I0314 18:53:36.640658  976444 command_runner.go:130] > #   state.
	I0314 18:53:36.640664  976444 command_runner.go:130] > # - runtime_config_path (optional, string): the path for the runtime configuration
	I0314 18:53:36.640672  976444 command_runner.go:130] > #   file. This can only be used when using the VM runtime_type.
	I0314 18:53:36.640677  976444 command_runner.go:130] > # - privileged_without_host_devices (optional, bool): an option for restricting
	I0314 18:53:36.640682  976444 command_runner.go:130] > #   host devices from being passed to privileged containers.
	I0314 18:53:36.640690  976444 command_runner.go:130] > # - allowed_annotations (optional, array of strings): an option for specifying
	I0314 18:53:36.640701  976444 command_runner.go:130] > #   a list of experimental annotations that this runtime handler is allowed to process.
	I0314 18:53:36.640708  976444 command_runner.go:130] > #   The currently recognized values are:
	I0314 18:53:36.640714  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.userns-mode" for configuring a user namespace for the pod.
	I0314 18:53:36.640722  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.cgroup2-mount-hierarchy-rw" for mounting cgroups writably when set to "true".
	I0314 18:53:36.640728  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.Devices" for configuring devices for the pod.
	I0314 18:53:36.640735  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.ShmSize" for configuring the size of /dev/shm.
	I0314 18:53:36.640744  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.UnifiedCgroup.$CTR_NAME" for configuring the cgroup v2 unified block for a container.
	I0314 18:53:36.640749  976444 command_runner.go:130] > #   "io.containers.trace-syscall" for tracing syscalls via the OCI seccomp BPF hook.
	I0314 18:53:36.640758  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.seccompNotifierAction" for enabling the seccomp notifier feature.
	I0314 18:53:36.640764  976444 command_runner.go:130] > #   "io.kubernetes.cri-o.umask" for setting the umask for container init process.
	I0314 18:53:36.640769  976444 command_runner.go:130] > #   "io.kubernetes.cri.rdt-class" for setting the RDT class of a container
	I0314 18:53:36.640777  976444 command_runner.go:130] > # - monitor_path (optional, string): The path of the monitor binary. Replaces
	I0314 18:53:36.640781  976444 command_runner.go:130] > #   deprecated option "conmon".
	I0314 18:53:36.640790  976444 command_runner.go:130] > # - monitor_cgroup (optional, string): The cgroup the container monitor process will be put in.
	I0314 18:53:36.640796  976444 command_runner.go:130] > #   Replaces deprecated option "conmon_cgroup".
	I0314 18:53:36.640802  976444 command_runner.go:130] > # - monitor_exec_cgroup (optional, string): If set to "container", indicates exec probes
	I0314 18:53:36.640809  976444 command_runner.go:130] > #   should be moved to the container's cgroup
	I0314 18:53:36.640816  976444 command_runner.go:130] > # - monitor_env (optional, array of strings): Environment variables to pass to the monitor.
	I0314 18:53:36.640823  976444 command_runner.go:130] > #   Replaces deprecated option "conmon_env".
	I0314 18:53:36.640829  976444 command_runner.go:130] > # - platform_runtime_paths (optional, map): A mapping of platforms to the corresponding
	I0314 18:53:36.640836  976444 command_runner.go:130] > #   runtime executable paths for the runtime handler.
	I0314 18:53:36.640839  976444 command_runner.go:130] > #
	I0314 18:53:36.640844  976444 command_runner.go:130] > # Using the seccomp notifier feature:
	I0314 18:53:36.640851  976444 command_runner.go:130] > #
	I0314 18:53:36.640859  976444 command_runner.go:130] > # This feature can help you to debug seccomp related issues, for example if
	I0314 18:53:36.640865  976444 command_runner.go:130] > # blocked syscalls (permission denied errors) have negative impact on the workload.
	I0314 18:53:36.640871  976444 command_runner.go:130] > #
	I0314 18:53:36.640876  976444 command_runner.go:130] > # To be able to use this feature, configure a runtime which has the annotation
	I0314 18:53:36.640883  976444 command_runner.go:130] > # "io.kubernetes.cri-o.seccompNotifierAction" in the allowed_annotations array.
	I0314 18:53:36.640886  976444 command_runner.go:130] > #
	I0314 18:53:36.640891  976444 command_runner.go:130] > # It also requires at least runc 1.1.0 or crun 0.19 which support the notifier
	I0314 18:53:36.640898  976444 command_runner.go:130] > # feature.
	I0314 18:53:36.640900  976444 command_runner.go:130] > #
	I0314 18:53:36.640906  976444 command_runner.go:130] > # If everything is setup, CRI-O will modify chosen seccomp profiles for
	I0314 18:53:36.640914  976444 command_runner.go:130] > # containers if the annotation "io.kubernetes.cri-o.seccompNotifierAction" is
	I0314 18:53:36.640920  976444 command_runner.go:130] > # set on the Pod sandbox. CRI-O will then get notified if a container is using
	I0314 18:53:36.640928  976444 command_runner.go:130] > # a blocked syscall and then terminate the workload after a timeout of 5
	I0314 18:53:36.640933  976444 command_runner.go:130] > # seconds if the value of "io.kubernetes.cri-o.seccompNotifierAction=stop".
	I0314 18:53:36.640936  976444 command_runner.go:130] > #
	I0314 18:53:36.640941  976444 command_runner.go:130] > # This also means that multiple syscalls can be captured during that period,
	I0314 18:53:36.640949  976444 command_runner.go:130] > # while the timeout will get reset once a new syscall has been discovered.
	I0314 18:53:36.640953  976444 command_runner.go:130] > #
	I0314 18:53:36.640964  976444 command_runner.go:130] > # This also means that the Pods "restartPolicy" has to be set to "Never",
	I0314 18:53:36.640975  976444 command_runner.go:130] > # otherwise the kubelet will restart the container immediately.
	I0314 18:53:36.640983  976444 command_runner.go:130] > #
	I0314 18:53:36.640993  976444 command_runner.go:130] > # Please be aware that CRI-O is not able to get notified if a syscall gets
	I0314 18:53:36.641004  976444 command_runner.go:130] > # blocked based on the seccomp defaultAction, which is a general runtime
	I0314 18:53:36.641012  976444 command_runner.go:130] > # limitation.
	I0314 18:53:36.641018  976444 command_runner.go:130] > [crio.runtime.runtimes.runc]
	I0314 18:53:36.641027  976444 command_runner.go:130] > runtime_path = "/usr/bin/runc"
	I0314 18:53:36.641036  976444 command_runner.go:130] > runtime_type = "oci"
	I0314 18:53:36.641043  976444 command_runner.go:130] > runtime_root = "/run/runc"
	I0314 18:53:36.641052  976444 command_runner.go:130] > runtime_config_path = ""
	I0314 18:53:36.641058  976444 command_runner.go:130] > monitor_path = "/usr/libexec/crio/conmon"
	I0314 18:53:36.641067  976444 command_runner.go:130] > monitor_cgroup = "pod"
	I0314 18:53:36.641072  976444 command_runner.go:130] > monitor_exec_cgroup = ""
	I0314 18:53:36.641078  976444 command_runner.go:130] > monitor_env = [
	I0314 18:53:36.641083  976444 command_runner.go:130] > 	"PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin",
	I0314 18:53:36.641087  976444 command_runner.go:130] > ]
	I0314 18:53:36.641098  976444 command_runner.go:130] > privileged_without_host_devices = false
	I0314 18:53:36.641107  976444 command_runner.go:130] > # The workloads table defines ways to customize containers with different resources
	I0314 18:53:36.641112  976444 command_runner.go:130] > # that work based on annotations, rather than the CRI.
	I0314 18:53:36.641117  976444 command_runner.go:130] > # Note, the behavior of this table is EXPERIMENTAL and may change at any time.
	I0314 18:53:36.641125  976444 command_runner.go:130] > # Each workload, has a name, activation_annotation, annotation_prefix and set of resources it supports mutating.
	I0314 18:53:36.641135  976444 command_runner.go:130] > # The currently supported resources are "cpu" (to configure the cpu shares) and "cpuset" to configure the cpuset.
	I0314 18:53:36.641141  976444 command_runner.go:130] > # Each resource can have a default value specified, or be empty.
	I0314 18:53:36.641152  976444 command_runner.go:130] > # For a container to opt-into this workload, the pod should be configured with the annotation $activation_annotation (key only, value is ignored).
	I0314 18:53:36.641163  976444 command_runner.go:130] > # To customize per-container, an annotation of the form $annotation_prefix.$resource/$ctrName = "value" can be specified
	I0314 18:53:36.641170  976444 command_runner.go:130] > # signifying for that resource type to override the default value.
	I0314 18:53:36.641179  976444 command_runner.go:130] > # If the annotation_prefix is not present, every container in the pod will be given the default values.
	I0314 18:53:36.641183  976444 command_runner.go:130] > # Example:
	I0314 18:53:36.641187  976444 command_runner.go:130] > # [crio.runtime.workloads.workload-type]
	I0314 18:53:36.641194  976444 command_runner.go:130] > # activation_annotation = "io.crio/workload"
	I0314 18:53:36.641198  976444 command_runner.go:130] > # annotation_prefix = "io.crio.workload-type"
	I0314 18:53:36.641203  976444 command_runner.go:130] > # [crio.runtime.workloads.workload-type.resources]
	I0314 18:53:36.641206  976444 command_runner.go:130] > # cpuset = 0
	I0314 18:53:36.641210  976444 command_runner.go:130] > # cpushares = "0-1"
	I0314 18:53:36.641213  976444 command_runner.go:130] > # Where:
	I0314 18:53:36.641217  976444 command_runner.go:130] > # The workload name is workload-type.
	I0314 18:53:36.641223  976444 command_runner.go:130] > # To specify, the pod must have the "io.crio.workload" annotation (this is a precise string match).
	I0314 18:53:36.641228  976444 command_runner.go:130] > # This workload supports setting cpuset and cpu resources.
	I0314 18:53:36.641233  976444 command_runner.go:130] > # annotation_prefix is used to customize the different resources.
	I0314 18:53:36.641240  976444 command_runner.go:130] > # To configure the cpu shares a container gets in the example above, the pod would have to have the following annotation:
	I0314 18:53:36.641244  976444 command_runner.go:130] > # "io.crio.workload-type/$container_name = {"cpushares": "value"}"
	I0314 18:53:36.641249  976444 command_runner.go:130] > # hostnetwork_disable_selinux determines whether
	I0314 18:53:36.641255  976444 command_runner.go:130] > # SELinux should be disabled within a pod when it is running in the host network namespace
	I0314 18:53:36.641258  976444 command_runner.go:130] > # Default value is set to true
	I0314 18:53:36.641262  976444 command_runner.go:130] > # hostnetwork_disable_selinux = true
	I0314 18:53:36.641267  976444 command_runner.go:130] > # disable_hostport_mapping determines whether to enable/disable
	I0314 18:53:36.641271  976444 command_runner.go:130] > # the container hostport mapping in CRI-O.
	I0314 18:53:36.641280  976444 command_runner.go:130] > # Default value is set to 'false'
	I0314 18:53:36.641285  976444 command_runner.go:130] > # disable_hostport_mapping = false
	I0314 18:53:36.641291  976444 command_runner.go:130] > # The crio.image table contains settings pertaining to the management of OCI images.
	I0314 18:53:36.641293  976444 command_runner.go:130] > #
	I0314 18:53:36.641298  976444 command_runner.go:130] > # CRI-O reads its configured registries defaults from the system wide
	I0314 18:53:36.641308  976444 command_runner.go:130] > # containers-registries.conf(5) located in /etc/containers/registries.conf. If
	I0314 18:53:36.641314  976444 command_runner.go:130] > # you want to modify just CRI-O, you can change the registries configuration in
	I0314 18:53:36.641320  976444 command_runner.go:130] > # this file. Otherwise, leave insecure_registries and registries commented out to
	I0314 18:53:36.641324  976444 command_runner.go:130] > # use the system's defaults from /etc/containers/registries.conf.
	I0314 18:53:36.641327  976444 command_runner.go:130] > [crio.image]
	I0314 18:53:36.641332  976444 command_runner.go:130] > # Default transport for pulling images from a remote container storage.
	I0314 18:53:36.641336  976444 command_runner.go:130] > # default_transport = "docker://"
	I0314 18:53:36.641342  976444 command_runner.go:130] > # The path to a file containing credentials necessary for pulling images from
	I0314 18:53:36.641347  976444 command_runner.go:130] > # secure registries. The file is similar to that of /var/lib/kubelet/config.json
	I0314 18:53:36.641350  976444 command_runner.go:130] > # global_auth_file = ""
	I0314 18:53:36.641355  976444 command_runner.go:130] > # The image used to instantiate infra containers.
	I0314 18:53:36.641362  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.641369  976444 command_runner.go:130] > # pause_image = "registry.k8s.io/pause:3.9"
	I0314 18:53:36.641378  976444 command_runner.go:130] > # The path to a file containing credentials specific for pulling the pause_image from
	I0314 18:53:36.641387  976444 command_runner.go:130] > # above. The file is similar to that of /var/lib/kubelet/config.json
	I0314 18:53:36.641394  976444 command_runner.go:130] > # This option supports live configuration reload.
	I0314 18:53:36.641400  976444 command_runner.go:130] > # pause_image_auth_file = ""
	I0314 18:53:36.641407  976444 command_runner.go:130] > # The command to run to have a container stay in the paused state.
	I0314 18:53:36.641417  976444 command_runner.go:130] > # When explicitly set to "", it will fallback to the entrypoint and command
	I0314 18:53:36.641426  976444 command_runner.go:130] > # specified in the pause image. When commented out, it will fallback to the
	I0314 18:53:36.641438  976444 command_runner.go:130] > # default: "/pause". This option supports live configuration reload.
	I0314 18:53:36.641444  976444 command_runner.go:130] > # pause_command = "/pause"
	I0314 18:53:36.641456  976444 command_runner.go:130] > # List of images to be excluded from the kubelet's garbage collection.
	I0314 18:53:36.641467  976444 command_runner.go:130] > # It allows specifying image names using either exact, glob, or keyword
	I0314 18:53:36.641476  976444 command_runner.go:130] > # patterns. Exact matches must match the entire name, glob matches can
	I0314 18:53:36.641487  976444 command_runner.go:130] > # have a wildcard * at the end, and keyword matches can have wildcards
	I0314 18:53:36.641505  976444 command_runner.go:130] > # on both ends. By default, this list includes the "pause" image if
	I0314 18:53:36.641517  976444 command_runner.go:130] > # configured by the user, which is used as a placeholder in Kubernetes pods.
	I0314 18:53:36.641526  976444 command_runner.go:130] > # pinned_images = [
	I0314 18:53:36.641531  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641542  976444 command_runner.go:130] > # Path to the file which decides what sort of policy we use when deciding
	I0314 18:53:36.641556  976444 command_runner.go:130] > # whether or not to trust an image that we've pulled. It is not recommended that
	I0314 18:53:36.641568  976444 command_runner.go:130] > # this option be used, as the default behavior of using the system-wide default
	I0314 18:53:36.641580  976444 command_runner.go:130] > # policy (i.e., /etc/containers/policy.json) is most often preferred. Please
	I0314 18:53:36.641590  976444 command_runner.go:130] > # refer to containers-policy.json(5) for more details.
	I0314 18:53:36.641599  976444 command_runner.go:130] > # signature_policy = ""
	I0314 18:53:36.641613  976444 command_runner.go:130] > # Root path for pod namespace-separated signature policies.
	I0314 18:53:36.641622  976444 command_runner.go:130] > # The final policy to be used on image pull will be <SIGNATURE_POLICY_DIR>/<NAMESPACE>.json.
	I0314 18:53:36.641628  976444 command_runner.go:130] > # If no pod namespace is being provided on image pull (via the sandbox config),
	I0314 18:53:36.641637  976444 command_runner.go:130] > # or the concatenated path is non existent, then the signature_policy or system
	I0314 18:53:36.641642  976444 command_runner.go:130] > # wide policy will be used as fallback. Must be an absolute path.
	I0314 18:53:36.641650  976444 command_runner.go:130] > # signature_policy_dir = "/etc/crio/policies"
	I0314 18:53:36.641655  976444 command_runner.go:130] > # List of registries to skip TLS verification for pulling images. Please
	I0314 18:53:36.641663  976444 command_runner.go:130] > # consider configuring the registries via /etc/containers/registries.conf before
	I0314 18:53:36.641667  976444 command_runner.go:130] > # changing them here.
	I0314 18:53:36.641673  976444 command_runner.go:130] > # insecure_registries = [
	I0314 18:53:36.641678  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641690  976444 command_runner.go:130] > # Controls how image volumes are handled. The valid values are mkdir, bind and
	I0314 18:53:36.641706  976444 command_runner.go:130] > # ignore; the latter will ignore volumes entirely.
	I0314 18:53:36.641715  976444 command_runner.go:130] > # image_volumes = "mkdir"
	I0314 18:53:36.641722  976444 command_runner.go:130] > # Temporary directory to use for storing big files
	I0314 18:53:36.641732  976444 command_runner.go:130] > # big_files_temporary_dir = ""
	I0314 18:53:36.641741  976444 command_runner.go:130] > # The crio.network table contains settings pertaining to the management of
	I0314 18:53:36.641750  976444 command_runner.go:130] > # CNI plugins.
	I0314 18:53:36.641756  976444 command_runner.go:130] > [crio.network]
	I0314 18:53:36.641768  976444 command_runner.go:130] > # The default CNI network name to be selected. If not set or "", then
	I0314 18:53:36.641779  976444 command_runner.go:130] > # CRI-O will pick-up the first one found in network_dir.
	I0314 18:53:36.641788  976444 command_runner.go:130] > # cni_default_network = ""
	I0314 18:53:36.641800  976444 command_runner.go:130] > # Path to the directory where CNI configuration files are located.
	I0314 18:53:36.641807  976444 command_runner.go:130] > # network_dir = "/etc/cni/net.d/"
	I0314 18:53:36.641812  976444 command_runner.go:130] > # Paths to directories where CNI plugin binaries are located.
	I0314 18:53:36.641818  976444 command_runner.go:130] > # plugin_dirs = [
	I0314 18:53:36.641822  976444 command_runner.go:130] > # 	"/opt/cni/bin/",
	I0314 18:53:36.641825  976444 command_runner.go:130] > # ]
	I0314 18:53:36.641833  976444 command_runner.go:130] > # A necessary configuration for Prometheus based metrics retrieval
	I0314 18:53:36.641838  976444 command_runner.go:130] > [crio.metrics]
	I0314 18:53:36.641845  976444 command_runner.go:130] > # Globally enable or disable metrics support.
	I0314 18:53:36.641849  976444 command_runner.go:130] > enable_metrics = true
	I0314 18:53:36.641855  976444 command_runner.go:130] > # Specify enabled metrics collectors.
	I0314 18:53:36.641859  976444 command_runner.go:130] > # Per default all metrics are enabled.
	I0314 18:53:36.641867  976444 command_runner.go:130] > # It is possible, to prefix the metrics with "container_runtime_" and "crio_".
	I0314 18:53:36.641873  976444 command_runner.go:130] > # For example, the metrics collector "operations" would be treated in the same
	I0314 18:53:36.641887  976444 command_runner.go:130] > # way as "crio_operations" and "container_runtime_crio_operations".
	I0314 18:53:36.641901  976444 command_runner.go:130] > # metrics_collectors = [
	I0314 18:53:36.641907  976444 command_runner.go:130] > # 	"operations",
	I0314 18:53:36.641912  976444 command_runner.go:130] > # 	"operations_latency_microseconds_total",
	I0314 18:53:36.641918  976444 command_runner.go:130] > # 	"operations_latency_microseconds",
	I0314 18:53:36.641922  976444 command_runner.go:130] > # 	"operations_errors",
	I0314 18:53:36.641928  976444 command_runner.go:130] > # 	"image_pulls_by_digest",
	I0314 18:53:36.641932  976444 command_runner.go:130] > # 	"image_pulls_by_name",
	I0314 18:53:36.641936  976444 command_runner.go:130] > # 	"image_pulls_by_name_skipped",
	I0314 18:53:36.641940  976444 command_runner.go:130] > # 	"image_pulls_failures",
	I0314 18:53:36.641944  976444 command_runner.go:130] > # 	"image_pulls_successes",
	I0314 18:53:36.641950  976444 command_runner.go:130] > # 	"image_pulls_layer_size",
	I0314 18:53:36.641955  976444 command_runner.go:130] > # 	"image_layer_reuse",
	I0314 18:53:36.641965  976444 command_runner.go:130] > # 	"containers_events_dropped_total",
	I0314 18:53:36.641971  976444 command_runner.go:130] > # 	"containers_oom_total",
	I0314 18:53:36.641980  976444 command_runner.go:130] > # 	"containers_oom",
	I0314 18:53:36.641987  976444 command_runner.go:130] > # 	"processes_defunct",
	I0314 18:53:36.641996  976444 command_runner.go:130] > # 	"operations_total",
	I0314 18:53:36.642003  976444 command_runner.go:130] > # 	"operations_latency_seconds",
	I0314 18:53:36.642012  976444 command_runner.go:130] > # 	"operations_latency_seconds_total",
	I0314 18:53:36.642019  976444 command_runner.go:130] > # 	"operations_errors_total",
	I0314 18:53:36.642028  976444 command_runner.go:130] > # 	"image_pulls_bytes_total",
	I0314 18:53:36.642035  976444 command_runner.go:130] > # 	"image_pulls_skipped_bytes_total",
	I0314 18:53:36.642044  976444 command_runner.go:130] > # 	"image_pulls_failure_total",
	I0314 18:53:36.642051  976444 command_runner.go:130] > # 	"image_pulls_success_total",
	I0314 18:53:36.642060  976444 command_runner.go:130] > # 	"image_layer_reuse_total",
	I0314 18:53:36.642069  976444 command_runner.go:130] > # 	"containers_oom_count_total",
	I0314 18:53:36.642076  976444 command_runner.go:130] > # 	"containers_seccomp_notifier_count_total",
	I0314 18:53:36.642086  976444 command_runner.go:130] > # 	"resources_stalled_at_stage",
	I0314 18:53:36.642091  976444 command_runner.go:130] > # ]
	I0314 18:53:36.642100  976444 command_runner.go:130] > # The port on which the metrics server will listen.
	I0314 18:53:36.642108  976444 command_runner.go:130] > # metrics_port = 9090
	I0314 18:53:36.642113  976444 command_runner.go:130] > # Local socket path to bind the metrics server to
	I0314 18:53:36.642120  976444 command_runner.go:130] > # metrics_socket = ""
	I0314 18:53:36.642125  976444 command_runner.go:130] > # The certificate for the secure metrics server.
	I0314 18:53:36.642130  976444 command_runner.go:130] > # If the certificate is not available on disk, then CRI-O will generate a
	I0314 18:53:36.642148  976444 command_runner.go:130] > # self-signed one. CRI-O also watches for changes of this path and reloads the
	I0314 18:53:36.642155  976444 command_runner.go:130] > # certificate on any modification event.
	I0314 18:53:36.642159  976444 command_runner.go:130] > # metrics_cert = ""
	I0314 18:53:36.642166  976444 command_runner.go:130] > # The certificate key for the secure metrics server.
	I0314 18:53:36.642171  976444 command_runner.go:130] > # Behaves in the same way as the metrics_cert.
	I0314 18:53:36.642177  976444 command_runner.go:130] > # metrics_key = ""
	I0314 18:53:36.642182  976444 command_runner.go:130] > # A necessary configuration for OpenTelemetry trace data exporting
	I0314 18:53:36.642185  976444 command_runner.go:130] > [crio.tracing]
	I0314 18:53:36.642190  976444 command_runner.go:130] > # Globally enable or disable exporting OpenTelemetry traces.
	I0314 18:53:36.642197  976444 command_runner.go:130] > # enable_tracing = false
	I0314 18:53:36.642202  976444 command_runner.go:130] > # Address on which the gRPC trace collector listens on.
	I0314 18:53:36.642208  976444 command_runner.go:130] > # tracing_endpoint = "0.0.0.0:4317"
	I0314 18:53:36.642214  976444 command_runner.go:130] > # Number of samples to collect per million spans. Set to 1000000 to always sample.
	I0314 18:53:36.642219  976444 command_runner.go:130] > # tracing_sampling_rate_per_million = 0
	I0314 18:53:36.642226  976444 command_runner.go:130] > # CRI-O NRI configuration.
	I0314 18:53:36.642229  976444 command_runner.go:130] > [crio.nri]
	I0314 18:53:36.642236  976444 command_runner.go:130] > # Globally enable or disable NRI.
	I0314 18:53:36.642239  976444 command_runner.go:130] > # enable_nri = false
	I0314 18:53:36.642245  976444 command_runner.go:130] > # NRI socket to listen on.
	I0314 18:53:36.642249  976444 command_runner.go:130] > # nri_listen = "/var/run/nri/nri.sock"
	I0314 18:53:36.642254  976444 command_runner.go:130] > # NRI plugin directory to use.
	I0314 18:53:36.642259  976444 command_runner.go:130] > # nri_plugin_dir = "/opt/nri/plugins"
	I0314 18:53:36.642266  976444 command_runner.go:130] > # NRI plugin configuration directory to use.
	I0314 18:53:36.642270  976444 command_runner.go:130] > # nri_plugin_config_dir = "/etc/nri/conf.d"
	I0314 18:53:36.642278  976444 command_runner.go:130] > # Disable connections from externally launched NRI plugins.
	I0314 18:53:36.642282  976444 command_runner.go:130] > # nri_disable_connections = false
	I0314 18:53:36.642290  976444 command_runner.go:130] > # Timeout for a plugin to register itself with NRI.
	I0314 18:53:36.642294  976444 command_runner.go:130] > # nri_plugin_registration_timeout = "5s"
	I0314 18:53:36.642301  976444 command_runner.go:130] > # Timeout for a plugin to handle an NRI request.
	I0314 18:53:36.642305  976444 command_runner.go:130] > # nri_plugin_request_timeout = "2s"
	I0314 18:53:36.642311  976444 command_runner.go:130] > # Necessary information pertaining to container and pod stats reporting.
	I0314 18:53:36.642315  976444 command_runner.go:130] > [crio.stats]
	I0314 18:53:36.642323  976444 command_runner.go:130] > # The number of seconds between collecting pod and container stats.
	I0314 18:53:36.642328  976444 command_runner.go:130] > # If set to 0, the stats are collected on-demand instead.
	I0314 18:53:36.642334  976444 command_runner.go:130] > # stats_collection_period = 0
	I0314 18:53:36.642363  976444 command_runner.go:130] ! time="2024-03-14 18:53:36.593997107Z" level=info msg="Starting CRI-O, version: 1.29.1, git: unknown(clean)"
	I0314 18:53:36.642392  976444 command_runner.go:130] ! level=info msg="Using default capabilities: CAP_CHOWN, CAP_DAC_OVERRIDE, CAP_FSETID, CAP_FOWNER, CAP_SETGID, CAP_SETUID, CAP_SETPCAP, CAP_NET_BIND_SERVICE, CAP_KILL"
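The dump above is CRI-O's fully commented runtime configuration. To re-check a single table on the node by hand, for example the runc handler this cluster actually uses, something along these lines should work (a minimal sketch, assuming the crio binary from the startup banner above; the banner and capability list go to stderr, hence the redirect):

	sudo crio config 2>/dev/null | grep -A 8 '\[crio.runtime.runtimes.runc\]'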
	I0314 18:53:36.642550  976444 cni.go:84] Creating CNI manager for ""
	I0314 18:53:36.642567  976444 cni.go:136] multinode detected (3 nodes found), recommending kindnet
	I0314 18:53:36.642579  976444 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 18:53:36.642600  976444 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.68 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:multinode-669543 NodeName:multinode-669543 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.68"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.68 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/et
c/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 18:53:36.642747  976444 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.68
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "multinode-669543"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.68
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.68"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
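For reference, a config of this shape is copied to the node as /var/tmp/minikube/kubeadm.yaml.new a few lines further down, and it can be sanity-checked there before anything is applied. A minimal sketch, assuming kubeadm v1.28 on the node's PATH:

	# check the config documents against their schemas without changing node state
	sudo kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new
	# or render everything kubeadm would do, without applying it
	sudo kubeadm init --config /var/tmp/minikube/kubeadm.yaml.new --dry-run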
	I0314 18:53:36.642817  976444 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 18:53:36.654534  976444 command_runner.go:130] > kubeadm
	I0314 18:53:36.654549  976444 command_runner.go:130] > kubectl
	I0314 18:53:36.654554  976444 command_runner.go:130] > kubelet
	I0314 18:53:36.654579  976444 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 18:53:36.654639  976444 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 18:53:36.665648  976444 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (315 bytes)
	I0314 18:53:36.683948  976444 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 18:53:36.701596  976444 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2157 bytes)
	I0314 18:53:36.720366  976444 ssh_runner.go:195] Run: grep 192.168.39.68	control-plane.minikube.internal$ /etc/hosts
	I0314 18:53:36.724918  976444 command_runner.go:130] > 192.168.39.68	control-plane.minikube.internal
	I0314 18:53:36.725001  976444 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 18:53:36.867575  976444 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 18:53:36.883121  976444 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543 for IP: 192.168.39.68
	I0314 18:53:36.883146  976444 certs.go:194] generating shared ca certs ...
	I0314 18:53:36.883169  976444 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 18:53:36.883375  976444 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 18:53:36.883432  976444 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 18:53:36.883447  976444 certs.go:256] generating profile certs ...
	I0314 18:53:36.883555  976444 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/client.key
	I0314 18:53:36.883634  976444 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key.b0a84d17
	I0314 18:53:36.883697  976444 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key
	I0314 18:53:36.883713  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /var/lib/minikube/certs/ca.crt
	I0314 18:53:36.883731  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key -> /var/lib/minikube/certs/ca.key
	I0314 18:53:36.883749  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt -> /var/lib/minikube/certs/proxy-client-ca.crt
	I0314 18:53:36.883768  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key -> /var/lib/minikube/certs/proxy-client-ca.key
	I0314 18:53:36.883792  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.crt -> /var/lib/minikube/certs/apiserver.crt
	I0314 18:53:36.883811  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key -> /var/lib/minikube/certs/apiserver.key
	I0314 18:53:36.883829  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.crt -> /var/lib/minikube/certs/proxy-client.crt
	I0314 18:53:36.883860  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key -> /var/lib/minikube/certs/proxy-client.key
	I0314 18:53:36.883926  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 18:53:36.883964  976444 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 18:53:36.883978  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 18:53:36.884093  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 18:53:36.884171  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 18:53:36.884237  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 18:53:36.884312  976444 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 18:53:36.884358  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt -> /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:36.884381  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem -> /usr/share/ca-certificates/951311.pem
	I0314 18:53:36.884399  976444 vm_assets.go:163] NewFileAsset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> /usr/share/ca-certificates/9513112.pem
	I0314 18:53:36.884978  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 18:53:36.913755  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 18:53:36.940272  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 18:53:36.967729  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 18:53:36.994769  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 18:53:37.021693  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 18:53:37.047860  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 18:53:37.074012  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/multinode-669543/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 18:53:37.099866  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 18:53:37.126224  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 18:53:37.152647  976444 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 18:53:37.179632  976444 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 18:53:37.197700  976444 ssh_runner.go:195] Run: openssl version
	I0314 18:53:37.203789  976444 command_runner.go:130] > OpenSSL 1.1.1w  11 Sep 2023
	I0314 18:53:37.203863  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 18:53:37.215249  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220324  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220362  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.220408  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 18:53:37.226451  976444 command_runner.go:130] > b5213941
	I0314 18:53:37.226498  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 18:53:37.236638  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 18:53:37.248291  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253182  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253324  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.253375  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 18:53:37.259373  976444 command_runner.go:130] > 51391683
	I0314 18:53:37.259591  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 18:53:37.269523  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 18:53:37.281120  976444 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285802  976444 command_runner.go:130] > -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285927  976444 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.285981  976444 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 18:53:37.291769  976444 command_runner.go:130] > 3ec20f2e
	I0314 18:53:37.291820  976444 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
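The hex-named links created above follow OpenSSL's subject-hash convention (the layout c_rehash would produce), which is what lets the system trust store pick these CAs up. A quick way to confirm one of them on the node, as a sketch using the paths from the log:

	openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
	readlink -f /etc/ssl/certs/b5213941.0                                     # resolves back to that PEM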
	I0314 18:53:37.301535  976444 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:53:37.306366  976444 command_runner.go:130] >   File: /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 18:53:37.306387  976444 command_runner.go:130] >   Size: 1164      	Blocks: 8          IO Block: 4096   regular file
	I0314 18:53:37.306393  976444 command_runner.go:130] > Device: 253,1	Inode: 3150397     Links: 1
	I0314 18:53:37.306399  976444 command_runner.go:130] > Access: (0644/-rw-r--r--)  Uid: (    0/    root)   Gid: (    0/    root)
	I0314 18:53:37.306410  976444 command_runner.go:130] > Access: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306418  976444 command_runner.go:130] > Modify: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306423  976444 command_runner.go:130] > Change: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306430  976444 command_runner.go:130] >  Birth: 2024-03-14 18:47:15.930651320 +0000
	I0314 18:53:37.306469  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 18:53:37.312488  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.312551  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 18:53:37.318353  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.318560  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 18:53:37.324472  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.324546  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 18:53:37.330269  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.330458  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 18:53:37.336186  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.336529  976444 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 18:53:37.342172  976444 command_runner.go:130] > Certificate will not expire
	I0314 18:53:37.342467  976444 kubeadm.go:391] StartCluster: {Name:multinode-669543 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.
4 ClusterName:multinode-669543 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.68 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true} {Name:m02 IP:192.168.39.16 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true} {Name:m03 IP:192.168.39.89 Port:0 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:false Worker:true}] Addons:map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:false efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false in
spektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:false storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:true ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:fal
se DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:53:37.342581  976444 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 18:53:37.342634  976444 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 18:53:37.380367  976444 command_runner.go:130] > 218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214
	I0314 18:53:37.380395  976444 command_runner.go:130] > ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd
	I0314 18:53:37.380406  976444 command_runner.go:130] > 13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886
	I0314 18:53:37.380415  976444 command_runner.go:130] > 1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434
	I0314 18:53:37.380495  976444 command_runner.go:130] > 38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef
	I0314 18:53:37.380513  976444 command_runner.go:130] > b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9
	I0314 18:53:37.380532  976444 command_runner.go:130] > 35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2
	I0314 18:53:37.380626  976444 command_runner.go:130] > 8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6
	I0314 18:53:37.382000  976444 cri.go:89] found id: "218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214"
	I0314 18:53:37.382016  976444 cri.go:89] found id: "ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd"
	I0314 18:53:37.382022  976444 cri.go:89] found id: "13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886"
	I0314 18:53:37.382027  976444 cri.go:89] found id: "1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434"
	I0314 18:53:37.382032  976444 cri.go:89] found id: "38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef"
	I0314 18:53:37.382036  976444 cri.go:89] found id: "b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9"
	I0314 18:53:37.382040  976444 cri.go:89] found id: "35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2"
	I0314 18:53:37.382043  976444 cri.go:89] found id: "8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6"
	I0314 18:53:37.382047  976444 cri.go:89] found id: ""
	I0314 18:53:37.382094  976444 ssh_runner.go:195] Run: sudo runc list -f json
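Each of the eight container IDs found above can be tied back to its pod by hand if needed; a rough sketch, assuming crictl and jq are present on the node:

	sudo crictl ps -a --label io.kubernetes.pod.namespace=kube-system -o json \
	  | jq -r '.containers[] | .id[:13] + "  " + .labels["io.kubernetes.pod.name"]'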
	
	
	==> CRI-O <==
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.924410125Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4fc56ef4-e6ec-4d0d-be19-0a4b1d94eebc name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.924717581Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4fc56ef4-e6ec-4d0d-be19-0a4b1d94eebc name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.967362427Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=09e975f2-edc1-43a8-8788-a6575a7959ab name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.967438806Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=09e975f2-edc1-43a8-8788-a6575a7959ab name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.968345730Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=d7810e40-5a8c-4653-8941-560bc9cffc1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.968867114Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442648968845311,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=d7810e40-5a8c-4653-8941-560bc9cffc1d name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.969303975Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4cbca606-b6be-4ca8-ba59-fcde9674dc31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.969383739Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4cbca606-b6be-4ca8-ba59-fcde9674dc31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:28 multinode-669543 crio[2871]: time="2024-03-14 18:57:28.969815740Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4cbca606-b6be-4ca8-ba59-fcde9674dc31 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.015032640Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=74199ed3-0942-48cf-94b9-3a9bf2424f92 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.015103968Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=74199ed3-0942-48cf-94b9-3a9bf2424f92 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.016280937Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=114e0d83-584e-4e40-b5d1-86c46ef27b82 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.016672704Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442649016650397,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=114e0d83-584e-4e40-b5d1-86c46ef27b82 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.017377923Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=3317d773-3cff-4f14-a101-07bdb16b9ec7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.017437396Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=3317d773-3cff-4f14-a101-07bdb16b9ec7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.017761831Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=3317d773-3cff-4f14-a101-07bdb16b9ec7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.056752667Z" level=debug msg="Request: &StatusRequest{Verbose:false,}" file="otel-collector/interceptors.go:62" id=6194a82a-0c28-4ab7-9ce7-7c5dfeca390e name=/runtime.v1.RuntimeService/Status
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.056824431Z" level=debug msg="Response: &StatusResponse{Status:&RuntimeStatus{Conditions:[]*RuntimeCondition{&RuntimeCondition{Type:RuntimeReady,Status:true,Reason:,Message:,},&RuntimeCondition{Type:NetworkReady,Status:true,Reason:,Message:,},},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=6194a82a-0c28-4ab7-9ce7-7c5dfeca390e name=/runtime.v1.RuntimeService/Status
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.063178735Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=1c66e75b-0787-4444-bf39-f968adb1b3b0 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.063236220Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=1c66e75b-0787-4444-bf39-f968adb1b3b0 name=/runtime.v1.RuntimeService/Version
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.065243133Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2568944b-1fcf-4413-9196-1813471a86a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.065665173Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710442649065645483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:134903,},InodesUsed:&UInt64Value{Value:67,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2568944b-1fcf-4413-9196-1813471a86a8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.066373525Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=8083311a-dd28-40cf-9e0f-a821f526eb9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.066456523Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=8083311a-dd28-40cf-9e0f-a821f526eb9c name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 18:57:29 multinode-669543 crio[2871]: time="2024-03-14 18:57:29.066898682Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:c9e5cf013f1ceeeec08fa616e6b0de7f56c87a68d920c9f4982a31bd81d00cc6,PodSandboxId:e3eb8d64101e77db0524de8600058f0d427bef1dd80d224893ca5764391ea2fe,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_RUNNING,CreatedAt:1710442458240382088,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessag
ePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85,PodSandboxId:d1068f17bc4e14b95579ac61e765edc5d939773a6ae1a2e2f9d05bd0b1268778,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:1,},Image:&ImageSpec{Image:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_RUNNING,CreatedAt:1710442424593595683,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kub
ernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf,PodSandboxId:64c87fe588bf8ad8eee061d825f6d6ce5efa40cdaca1dea321c69fe145d476e5,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710442424714048238,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol
\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:131b388ec4eb6fa6f24572857eafde3fc98b33a93fc3713b34f2a973166bd0ae,PodSandboxId:7c9f8981a0c43b613368cfe518ebb6315f1fdcacbc7f0c740b339c4eb259a789,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710442424562296978,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},A
nnotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517,PodSandboxId:3e99e94db5408ad8d482343e423f354a977dbbc3a8a8da061e164ba50d2db76b,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:1710442424451646230,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.k
ubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e,PodSandboxId:6d354a5baa15a3946e848d3048520203bc7bb10c31b82c8642e78dd21fc3a8ae,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710442419807296820,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:map[string]string{io.kubernetes.conta
iner.hash: e1639c7a,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc,PodSandboxId:60e03a93ae441d04da0750963eb0b1de334dce8d781586ed010b702e40df3b58,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710442419829580792,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6aa8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.
restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30,PodSandboxId:fe6d4a63116924891b7915b61e84d5342e6e8bfd35bd0dd272f8a3dda29c246c,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710442419803732647,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount:
1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555,PodSandboxId:27231f52f0d520026560d7af10d929fb227d811317a509a5c0b73bfe36139d81,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710442419722688656,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.r
estartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:2e15cccd80d48b27e1c5c838985e20b3132623b91268521c631a2983dd8d80cb,PodSandboxId:5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88,Metadata:&ContainerMetadata{Name:busybox,Attempt:0,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a,State:CONTAINER_EXITED,CreatedAt:1710442109278870608,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox-5b5d89c9d6-wdd4q,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 2e875679-7a0f-4c0e-bb89-61d1d25322b9,},Annotations:map[string]string{io.kubernetes.container.hash: be68507b,io.kubernetes.container.re
startCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214,PodSandboxId:b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_EXITED,CreatedAt:1710442063352535638,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-z2ssg,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 36fffd3c-245b-4633-96d8-3c1fc216830c,},Annotations:map[string]string{io.kubernetes.container.hash: f6d321e8,io.kubernetes.container.ports: [{\"name\":\"dns\",\"contai
nerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ede9eb36f24d11da78c8c0df30071a8ebedb8bdc3e8fea859927287fc0e5d0fd,PodSandboxId:c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710442063292147663,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod
.namespace: kube-system,io.kubernetes.pod.uid: a06c8b3d-ca1e-491c-bb77-ce60da9c5f96,},Annotations:map[string]string{io.kubernetes.container.hash: 288e1062,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886,PodSandboxId:9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4,Metadata:&ContainerMetadata{Name:kindnet-cni,Attempt:0,},Image:&ImageSpec{Image:docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5,State:CONTAINER_EXITED,CreatedAt:1710442061564875884,Labels:map[string]string{io.kubernetes.container.name: kindnet-cni,io.kubernetes.pod.name: kindnet-j8rsz,io.kubernetes.pod.
namespace: kube-system,io.kubernetes.pod.uid: d74393fb-ce21-43b4-9400-960184cbe665,},Annotations:map[string]string{io.kubernetes.container.hash: 234405fb,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434,PodSandboxId:a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_EXITED,CreatedAt:1710442060049101968,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-gv9z7,io.kubernetes.pod.namespace: kube-system,io.kubernet
es.pod.uid: ba3b7b3c-c295-4e3f-98f3-66278b5bf7d6,},Annotations:map[string]string{io.kubernetes.container.hash: f6f274a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9,PodSandboxId:375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:0,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_EXITED,CreatedAt:1710442039636411679,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-multinode-669543,io.kubernetes.pod.namespace: kube-sys
tem,io.kubernetes.pod.uid: acac00fe4893897b1bd8f4bb5003ed66,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef,PodSandboxId:c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838,Metadata:&ContainerMetadata{Name:etcd,Attempt:0,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_EXITED,CreatedAt:1710442039661252184,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9bfe873d8c178b7d6a
a8ac90faa3a096,},Annotations:map[string]string{io.kubernetes.container.hash: 42c7fa83,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6,PodSandboxId:e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:0,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_EXITED,CreatedAt:1710442039628259186,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 5a0929f2d9c353ee576b697cb4a8fdc9,
},Annotations:map[string]string{io.kubernetes.container.hash: 3ca5ccb6,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2,PodSandboxId:6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:0,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_EXITED,CreatedAt:1710442039632490257,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-multinode-669543,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cd1dfb67d1de36699e8c1e198b392ffb,},Annotations:m
ap[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=8083311a-dd28-40cf-9e0f-a821f526eb9c name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	c9e5cf013f1ce       8c811b4aec35f259572d0f79207bc0678df4c736eeec50bc9fec37ed936a472a                                      3 minutes ago       Running             busybox                   1                   e3eb8d64101e7       busybox-5b5d89c9d6-wdd4q
	25c482a385775       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      3 minutes ago       Running             coredns                   1                   64c87fe588bf8       coredns-5dd5756b68-z2ssg
	c99ebafcfeb0b       4950bb10b3f87e8d4a8f772a0d8934625cac4ccfa3675fea34cad0dab83fd5a5                                      3 minutes ago       Running             kindnet-cni               1                   d1068f17bc4e1       kindnet-j8rsz
	131b388ec4eb6       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      3 minutes ago       Running             storage-provisioner       1                   7c9f8981a0c43       storage-provisioner
	97671595afead       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      3 minutes ago       Running             kube-proxy                1                   3e99e94db5408       kube-proxy-gv9z7
	2399815b362d2       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      3 minutes ago       Running             etcd                      1                   60e03a93ae441       etcd-multinode-669543
	dabf1dd85b17a       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      3 minutes ago       Running             kube-scheduler            1                   6d354a5baa15a       kube-scheduler-multinode-669543
	fd45f1769a655       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      3 minutes ago       Running             kube-apiserver            1                   fe6d4a6311692       kube-apiserver-multinode-669543
	7b3bebef91bde       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      3 minutes ago       Running             kube-controller-manager   1                   27231f52f0d52       kube-controller-manager-multinode-669543
	2e15cccd80d48       gcr.io/k8s-minikube/busybox@sha256:74f634b1bc1bd74535d5209589734efbd44a25f4e2dc96d78784576a3eb5b335   8 minutes ago       Exited              busybox                   0                   5ff3ff89264fe       busybox-5b5d89c9d6-wdd4q
	218bd1408ce44       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc                                      9 minutes ago       Exited              coredns                   0                   b65e68430d817       coredns-5dd5756b68-z2ssg
	ede9eb36f24d1       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      9 minutes ago       Exited              storage-provisioner       0                   c44d003130ba6       storage-provisioner
	13b4bebfdcd4e       docker.io/kindest/kindnetd@sha256:61f9956af8019caf6dcc4d39b31857b868aaab80521432ddcc216b805c4f7988    9 minutes ago       Exited              kindnet-cni               0                   9b9c5bc67c578       kindnet-j8rsz
	1a160e30f8660       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e                                      9 minutes ago       Exited              kube-proxy                0                   a78eb4281690b       kube-proxy-gv9z7
	38944d713ba28       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9                                      10 minutes ago      Exited              etcd                      0                   c80d82d6e24a9       etcd-multinode-669543
	b44b3dc852c67       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591                                      10 minutes ago      Exited              kube-controller-manager   0                   375c781f82868       kube-controller-manager-multinode-669543
	35d37a68a0eec       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1                                      10 minutes ago      Exited              kube-scheduler            0                   6f734f5704330       kube-scheduler-multinode-669543
	8703d57d41951       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257                                      10 minutes ago      Exited              kube-apiserver            0                   e87a510584e32       kube-apiserver-multinode-669543
	
	
	==> coredns [218bd1408ce44026a06dc17fe2da54db8fab1f453a2793efa1f537de49659214] <==
	[INFO] 10.244.0.3:37224 - 3 "AAAA IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.001467997s
	[INFO] 10.244.0.3:44702 - 4 "AAAA IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000037412s
	[INFO] 10.244.0.3:47299 - 5 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000034441s
	[INFO] 10.244.0.3:43385 - 6 "A IN kubernetes.default. udp 36 false 512" NXDOMAIN qr,rd,ra 36 0.00080483s
	[INFO] 10.244.0.3:48298 - 7 "A IN kubernetes.default.default.svc.cluster.local. udp 62 false 512" NXDOMAIN qr,aa,rd 155 0.000030046s
	[INFO] 10.244.0.3:46943 - 8 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000026659s
	[INFO] 10.244.0.3:49889 - 9 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000138651s
	[INFO] 10.244.1.2:57501 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000129654s
	[INFO] 10.244.1.2:47443 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000167448s
	[INFO] 10.244.1.2:57696 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000156261s
	[INFO] 10.244.1.2:51402 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.00008625s
	[INFO] 10.244.0.3:34523 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000100518s
	[INFO] 10.244.0.3:40478 - 3 "AAAA IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 147 0.000205966s
	[INFO] 10.244.0.3:54659 - 4 "A IN kubernetes.default.svc.cluster.local. udp 54 false 512" NOERROR qr,aa,rd 106 0.000114635s
	[INFO] 10.244.0.3:60980 - 5 "PTR IN 1.0.96.10.in-addr.arpa. udp 40 false 512" NOERROR qr,aa,rd 112 0.000075611s
	[INFO] 10.244.1.2:33839 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.00014272s
	[INFO] 10.244.1.2:58938 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000235074s
	[INFO] 10.244.1.2:60502 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00013409s
	[INFO] 10.244.1.2:44031 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000218951s
	[INFO] 10.244.0.3:43462 - 2 "PTR IN 10.0.96.10.in-addr.arpa. udp 41 false 512" NOERROR qr,aa,rd 116 0.000128452s
	[INFO] 10.244.0.3:58131 - 3 "AAAA IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 40 0.000097696s
	[INFO] 10.244.0.3:60406 - 4 "A IN host.minikube.internal. udp 40 false 512" NOERROR qr,aa,rd 78 0.00010639s
	[INFO] 10.244.0.3:47260 - 5 "PTR IN 1.39.168.192.in-addr.arpa. udp 43 false 512" NOERROR qr,aa,rd 104 0.000085097s
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [25c482a385775ca36ae052983d2cc89c99fea3df611d55f304c9173949f2e5bf] <==
	[INFO] plugin/ready: Still waiting on: "kubernetes"
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	[INFO] 127.0.0.1:34243 - 23074 "HINFO IN 3443610798101173519.8912060676849101262. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.015786475s
	
	
	==> describe nodes <==
	Name:               multinode-669543
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-669543
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-669543
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T18_47_26_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:47:22 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-669543
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:57:28 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:20 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 18:53:43 +0000   Thu, 14 Mar 2024 18:47:42 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.68
	  Hostname:    multinode-669543
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 542fcf9277304e138330d4f556f68ad2
	  System UUID:                542fcf92-7730-4e13-8330-d4f556f68ad2
	  Boot ID:                    b3c9cb6d-9323-4693-b839-d5ce5214638a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-wdd4q                    0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m2s
	  kube-system                 coredns-5dd5756b68-z2ssg                    100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m51s
	  kube-system                 etcd-multinode-669543                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         10m
	  kube-system                 kindnet-j8rsz                               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m51s
	  kube-system                 kube-apiserver-multinode-669543             250m (12%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-controller-manager-multinode-669543    200m (10%)    0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 kube-proxy-gv9z7                            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m51s
	  kube-system                 kube-scheduler-multinode-669543             100m (5%)     0 (0%)      0 (0%)           0 (0%)         10m
	  kube-system                 storage-provisioner                         0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m49s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   100m (5%)
	  memory             220Mi (10%)  220Mi (10%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m48s                  kube-proxy       
	  Normal  Starting                 3m44s                  kube-proxy       
	  Normal  NodeAllocatableEnforced  10m                    kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  10m                    kubelet          Node multinode-669543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    10m                    kubelet          Node multinode-669543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     10m                    kubelet          Node multinode-669543 status is now: NodeHasSufficientPID
	  Normal  Starting                 10m                    kubelet          Starting kubelet.
	  Normal  RegisteredNode           9m52s                  node-controller  Node multinode-669543 event: Registered Node multinode-669543 in Controller
	  Normal  NodeReady                9m47s                  kubelet          Node multinode-669543 status is now: NodeReady
	  Normal  Starting                 3m51s                  kubelet          Starting kubelet.
	  Normal  NodeAllocatableEnforced  3m51s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  3m50s (x8 over 3m51s)  kubelet          Node multinode-669543 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m50s (x8 over 3m51s)  kubelet          Node multinode-669543 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m50s (x7 over 3m51s)  kubelet          Node multinode-669543 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           3m33s                  node-controller  Node multinode-669543 event: Registered Node multinode-669543 in Controller
	
	
	Name:               multinode-669543-m02
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=multinode-669543-m02
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=multinode-669543
	                    minikube.k8s.io/primary=false
	                    minikube.k8s.io/updated_at=2024_03_14T18_54_27_0700
	                    minikube.k8s.io/version=v1.32.0
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 18:54:27 +0000
	Taints:             node.kubernetes.io/unreachable:NoExecute
	                    node.kubernetes.io/unreachable:NoSchedule
	Unschedulable:      false
	Lease:
	  HolderIdentity:  multinode-669543-m02
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 18:55:08 +0000
	Conditions:
	  Type             Status    LastHeartbeatTime                 LastTransitionTime                Reason              Message
	  ----             ------    -----------------                 ------------------                ------              -------
	  MemoryPressure   Unknown   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  DiskPressure     Unknown   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  PIDPressure      Unknown   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	  Ready            Unknown   Thu, 14 Mar 2024 18:54:57 +0000   Thu, 14 Mar 2024 18:55:51 +0000   NodeStatusUnknown   Kubelet stopped posting node status.
	Addresses:
	  InternalIP:  192.168.39.16
	  Hostname:    multinode-669543-m02
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 701481c581d0418e8c30473e0e0e1f20
	  System UUID:                701481c5-81d0-418e-8c30-473e0e0e1f20
	  Boot ID:                    f70ac467-a980-478a-a70f-2b859eb40567
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.1.0/24
	PodCIDRs:                     10.244.1.0/24
	Non-terminated Pods:          (3 in total)
	  Namespace                   Name                        CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                        ------------  ----------  ---------------  -------------  ---
	  default                     busybox-5b5d89c9d6-hgm7c    0 (0%)        0 (0%)      0 (0%)           0 (0%)         3m8s
	  kube-system                 kindnet-fjd7q               100m (5%)     100m (5%)   50Mi (2%)        50Mi (2%)      9m12s
	  kube-system                 kube-proxy-r4pb9            0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m12s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests   Limits
	  --------           --------   ------
	  cpu                100m (5%)  100m (5%)
	  memory             50Mi (2%)  50Mi (2%)
	  ephemeral-storage  0 (0%)     0 (0%)
	  hugepages-2Mi      0 (0%)     0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m7s                   kube-proxy       
	  Normal  Starting                 2m57s                  kube-proxy       
	  Normal  NodeHasSufficientMemory  9m12s (x5 over 9m13s)  kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m12s (x5 over 9m13s)  kubelet          Node multinode-669543-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m12s (x5 over 9m13s)  kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientPID
	  Normal  NodeReady                9m4s                   kubelet          Node multinode-669543-m02 status is now: NodeReady
	  Normal  NodeHasSufficientMemory  3m2s (x5 over 3m4s)    kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    3m2s (x5 over 3m4s)    kubelet          Node multinode-669543-m02 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     3m2s (x5 over 3m4s)    kubelet          Node multinode-669543-m02 status is now: NodeHasSufficientPID
	  Normal  RegisteredNode           2m58s                  node-controller  Node multinode-669543-m02 event: Registered Node multinode-669543-m02 in Controller
	  Normal  NodeReady                2m55s                  kubelet          Node multinode-669543-m02 status is now: NodeReady
	  Normal  NodeNotReady             98s                    node-controller  Node multinode-669543-m02 status is now: NodeNotReady
	
	
	==> dmesg <==
	[  +0.062893] systemd-fstab-generator[604]: Ignoring "noauto" option for root device
	[  +0.182414] systemd-fstab-generator[618]: Ignoring "noauto" option for root device
	[  +0.136892] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +0.257960] systemd-fstab-generator[656]: Ignoring "noauto" option for root device
	[  +5.164998] systemd-fstab-generator[755]: Ignoring "noauto" option for root device
	[  +0.064438] kauditd_printk_skb: 130 callbacks suppressed
	[  +4.153420] systemd-fstab-generator[938]: Ignoring "noauto" option for root device
	[  +1.037506] kauditd_printk_skb: 57 callbacks suppressed
	[  +5.729969] systemd-fstab-generator[1271]: Ignoring "noauto" option for root device
	[  +0.082774] kauditd_printk_skb: 30 callbacks suppressed
	[ +13.729109] systemd-fstab-generator[1469]: Ignoring "noauto" option for root device
	[  +0.117670] kauditd_printk_skb: 21 callbacks suppressed
	[  +5.603216] kauditd_printk_skb: 82 callbacks suppressed
	[Mar14 18:53] systemd-fstab-generator[2793]: Ignoring "noauto" option for root device
	[  +0.160648] systemd-fstab-generator[2805]: Ignoring "noauto" option for root device
	[  +0.176099] systemd-fstab-generator[2820]: Ignoring "noauto" option for root device
	[  +0.154255] systemd-fstab-generator[2832]: Ignoring "noauto" option for root device
	[  +0.261656] systemd-fstab-generator[2856]: Ignoring "noauto" option for root device
	[  +7.405848] systemd-fstab-generator[2958]: Ignoring "noauto" option for root device
	[  +0.087595] kauditd_printk_skb: 100 callbacks suppressed
	[  +1.723058] systemd-fstab-generator[3081]: Ignoring "noauto" option for root device
	[  +5.724329] kauditd_printk_skb: 74 callbacks suppressed
	[ +12.161927] kauditd_printk_skb: 32 callbacks suppressed
	[  +3.592284] systemd-fstab-generator[3915]: Ignoring "noauto" option for root device
	[Mar14 18:54] kauditd_printk_skb: 14 callbacks suppressed
	
	
	==> etcd [2399815b362d2cd5661dc74ad85cc3a552c809a0873cfcd845251e52a7f67bbc] <==
	{"level":"info","ts":"2024-03-14T18:53:40.506528Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:53:40.506538Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T18:53:40.506855Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 switched to configuration voters=(9375015013596480675)"}
	{"level":"info","ts":"2024-03-14T18:53:40.507007Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","added-peer-id":"821abe7be15f44a3","added-peer-peer-urls":["https://192.168.39.68:2380"]}
	{"level":"info","ts":"2024-03-14T18:53:40.507128Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:53:40.507181Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:53:40.54436Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T18:53:40.544631Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"821abe7be15f44a3","initial-advertise-peer-urls":["https://192.168.39.68:2380"],"listen-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.39.68:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T18:53:40.544689Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T18:53:40.544812Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:53:40.544845Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:53:41.700143Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700211Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700228Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgPreVoteResp from 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:53:41.700239Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.700245Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.700253Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became leader at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.70026Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 3"}
	{"level":"info","ts":"2024-03-14T18:53:41.705887Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"821abe7be15f44a3","local-member-attributes":"{Name:multinode-669543 ClientURLs:[https://192.168.39.68:2379]}","request-path":"/0/members/821abe7be15f44a3/attributes","cluster-id":"68cd46418ae274f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:53:41.7059Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:53:41.706198Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:53:41.707622Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.68:2379"}
	{"level":"info","ts":"2024-03-14T18:53:41.70764Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:53:41.707773Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:53:41.707811Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> etcd [38944d713ba28514a0b520f7b04871642d807aa4deef15cd20d9bee4f6a4d9ef] <==
	{"level":"info","ts":"2024-03-14T18:47:20.4721Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472105Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 received MsgVoteResp from 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472113Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"821abe7be15f44a3 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.472121Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 821abe7be15f44a3 elected leader 821abe7be15f44a3 at term 2"}
	{"level":"info","ts":"2024-03-14T18:47:20.477027Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.480525Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"821abe7be15f44a3","local-member-attributes":"{Name:multinode-669543 ClientURLs:[https://192.168.39.68:2379]}","request-path":"/0/members/821abe7be15f44a3/attributes","cluster-id":"68cd46418ae274f9","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T18:47:20.480581Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:47:20.481719Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T18:47:20.481791Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T18:47:20.487574Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.68:2379"}
	{"level":"info","ts":"2024-03-14T18:47:20.492037Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T18:47:20.492155Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T18:47:20.49721Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"68cd46418ae274f9","local-member-id":"821abe7be15f44a3","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.497451Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:47:20.4976Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T18:51:57.209849Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T18:51:57.21489Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"multinode-669543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	{"level":"warn","ts":"2024-03-14T18:51:57.222407Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.222605Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.306684Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T18:51:57.306755Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.39.68:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T18:51:57.308437Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"821abe7be15f44a3","current-leader-member-id":"821abe7be15f44a3"}
	{"level":"info","ts":"2024-03-14T18:51:57.311165Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:51:57.311398Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.39.68:2380"}
	{"level":"info","ts":"2024-03-14T18:51:57.311441Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"multinode-669543","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.39.68:2380"],"advertise-client-urls":["https://192.168.39.68:2379"]}
	
	
	==> kernel <==
	 18:57:29 up 10 min,  0 users,  load average: 0.01, 0.13, 0.11
	Linux multinode-669543 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kindnet [13b4bebfdcd4e5fe8ce6751710cdab85910020403dab5806446c9127c70f6886] <==
	I0314 18:51:12.583838       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:22.590868       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:22.590996       1 main.go:227] handling current node
	I0314 18:51:22.591028       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:22.591035       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:22.591197       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:22.591227       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:32.604525       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:32.604672       1 main.go:227] handling current node
	I0314 18:51:32.604759       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:32.604831       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:32.605093       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:32.605155       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:42.618795       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:42.618888       1 main.go:227] handling current node
	I0314 18:51:42.618997       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:42.619028       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:42.623034       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:42.623092       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	I0314 18:51:52.630046       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:51:52.630105       1 main.go:227] handling current node
	I0314 18:51:52.630115       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:51:52.630121       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:51:52.630237       1 main.go:223] Handling node with IPs: map[192.168.39.89:{}]
	I0314 18:51:52.630271       1 main.go:250] Node multinode-669543-m03 has CIDR [10.244.3.0/24] 
	
	
	==> kindnet [c99ebafcfeb0b6dd205e2480d1f847777c8f9965e1d6aee0d9c3fd8f01b80f85] <==
	I0314 18:56:25.892680       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:56:35.899879       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:56:35.900026       1 main.go:227] handling current node
	I0314 18:56:35.900038       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:56:35.900045       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:56:45.960147       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:56:45.960243       1 main.go:227] handling current node
	I0314 18:56:45.960274       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:56:45.960293       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:56:55.965284       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:56:55.965373       1 main.go:227] handling current node
	I0314 18:56:55.965396       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:56:55.965414       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:57:05.980470       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:57:05.980558       1 main.go:227] handling current node
	I0314 18:57:05.980580       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:57:05.980597       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:57:15.985881       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:57:15.985989       1 main.go:227] handling current node
	I0314 18:57:15.985998       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:57:15.986005       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	I0314 18:57:26.000364       1 main.go:223] Handling node with IPs: map[192.168.39.68:{}]
	I0314 18:57:26.000428       1 main.go:227] handling current node
	I0314 18:57:26.000458       1 main.go:223] Handling node with IPs: map[192.168.39.16:{}]
	I0314 18:57:26.000464       1 main.go:250] Node multinode-669543-m02 has CIDR [10.244.1.0/24] 
	
	
	==> kube-apiserver [8703d57d41951f0cee529405c1d972b4f076de21d9bd251ffa6096f9be8e89f6] <==
	I0314 18:47:22.366829       1 controller.go:624] quota admission added evaluator for: namespaces
	I0314 18:47:22.371656       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:47:22.386354       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:47:22.386407       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:47:22.386440       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:47:22.386463       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:47:22.386485       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:47:22.405900       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:47:22.415663       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 18:47:23.265444       1 storage_scheduling.go:95] created PriorityClass system-node-critical with value 2000001000
	I0314 18:47:23.270234       1 storage_scheduling.go:95] created PriorityClass system-cluster-critical with value 2000000000
	I0314 18:47:23.270271       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:47:23.873525       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:47:23.917574       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:47:24.010181       1 alloc.go:330] "allocated clusterIPs" service="default/kubernetes" clusterIPs={"IPv4":"10.96.0.1"}
	W0314 18:47:24.021016       1 lease.go:263] Resetting endpoints for master service "kubernetes" to [192.168.39.68]
	I0314 18:47:24.024801       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:47:24.036640       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 18:47:24.336474       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 18:47:25.350055       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 18:47:25.364502       1 alloc.go:330] "allocated clusterIPs" service="kube-system/kube-dns" clusterIPs={"IPv4":"10.96.0.10"}
	I0314 18:47:25.381164       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 18:47:38.554988       1 controller.go:624] quota admission added evaluator for: replicasets.apps
	I0314 18:47:38.604244       1 controller.go:624] quota admission added evaluator for: controllerrevisions.apps
	I0314 18:51:57.214350       1 controller.go:128] Shutting down kubernetes service endpoint reconciler
	
	
	==> kube-apiserver [fd45f1769a65556145848d5f2e852287fb09ec6e09cebcf05fb181ea1c741d30] <==
	I0314 18:53:43.125278       1 dynamic_serving_content.go:132] "Starting controller" name="aggregator-proxy-cert::/var/lib/minikube/certs/front-proxy-client.crt::/var/lib/minikube/certs/front-proxy-client.key"
	I0314 18:53:43.125563       1 apiservice_controller.go:97] Starting APIServiceRegistrationController
	I0314 18:53:43.125604       1 cache.go:32] Waiting for caches to sync for APIServiceRegistrationController controller
	I0314 18:53:43.125654       1 apf_controller.go:372] Starting API Priority and Fairness config controller
	I0314 18:53:43.215530       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 18:53:43.217308       1 shared_informer.go:318] Caches are synced for crd-autoregister
	I0314 18:53:43.217609       1 aggregator.go:166] initial CRD sync complete...
	I0314 18:53:43.217656       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 18:53:43.217662       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 18:53:43.218092       1 cache.go:39] Caches are synced for autoregister controller
	I0314 18:53:43.231882       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 18:53:43.236731       1 apf_controller.go:377] Running API Priority and Fairness config worker
	I0314 18:53:43.236774       1 apf_controller.go:380] Running API Priority and Fairness periodic rebalancing process
	I0314 18:53:43.241157       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 18:53:43.247454       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 18:53:43.295011       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 18:53:43.307080       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 18:53:44.118327       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	I0314 18:53:45.893257       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 18:53:46.018309       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 18:53:46.026517       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 18:53:46.099454       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 18:53:46.109448       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	I0314 18:53:56.337123       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 18:53:56.430790       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	
	
	==> kube-controller-manager [7b3bebef91bde0c9fc9055fae97404c7217d388defc5bede42abc3ecada69555] <==
	I0314 18:54:34.818142       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:34.841018       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="144.146µs"
	I0314 18:54:34.855480       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="38.644µs"
	I0314 18:54:36.178382       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="5.084713ms"
	I0314 18:54:36.179322       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="33.817µs"
	I0314 18:54:36.413602       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hgm7c" fieldPath="" kind="Pod" apiVersion="" type="Normal" reason="TaintManagerEviction" message="Cancelling deletion of Pod default/busybox-5b5d89c9d6-hgm7c"
	I0314 18:54:54.217150       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:56.417056       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m03 event: Removing Node multinode-669543-m03 from Controller"
	I0314 18:54:56.570204       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:54:56.571142       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:54:56.603000       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.2.0/24"]
	I0314 18:55:01.418598       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:55:01.899243       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:55:07.747644       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:55:11.437829       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m03 event: Removing Node multinode-669543-m03 from Controller"
	I0314 18:55:36.387266       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kindnet-xcn2w"
	I0314 18:55:36.410459       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kindnet-xcn2w"
	I0314 18:55:36.410600       1 gc_controller.go:344] "PodGC is force deleting Pod" pod="kube-system/kube-proxy-vjshs"
	I0314 18:55:36.436487       1 gc_controller.go:260] "Forced deletion of orphaned Pod succeeded" pod="kube-system/kube-proxy-vjshs"
	I0314 18:55:51.458477       1 event.go:307] "Event occurred" object="multinode-669543-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-669543-m02 status is now: NodeNotReady"
	I0314 18:55:51.473898       1 event.go:307] "Event occurred" object="kube-system/kindnet-fjd7q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:55:51.491259       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-r4pb9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:55:51.510306       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-hgm7c" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:55:51.530287       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="18.816822ms"
	I0314 18:55:51.531002       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="267.59µs"
	
	
	==> kube-controller-manager [b44b3dc852c6729eb674a973e656eb4098c377671edb5d36a4d33307676ba9f9] <==
	I0314 18:49:02.705291       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:02.705403       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:49:02.733807       1 event.go:307] "Event occurred" object="kube-system/kindnet" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kindnet-xcn2w"
	I0314 18:49:02.733860       1 event.go:307] "Event occurred" object="kube-system/kube-proxy" fieldPath="" kind="DaemonSet" apiVersion="apps/v1" type="Normal" reason="SuccessfulCreate" message="Created pod: kube-proxy-vjshs"
	I0314 18:49:02.740557       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.2.0/24"]
	I0314 18:49:02.949852       1 node_lifecycle_controller.go:877] "Missing timestamp for Node. Assuming now as a timestamp" node="multinode-669543-m03"
	I0314 18:49:02.950153       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:49:09.848898       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:41.447980       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:42.974586       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RemovingNode" message="Node multinode-669543-m03 event: Removing Node multinode-669543-m03 from Controller"
	I0314 18:49:44.120329       1 actual_state_of_world.go:547] "Failed to update statusUpdateNeeded field in actual state of world" err="Failed to set statusUpdateNeeded to needed true, because nodeName=\"multinode-669543-m03\" does not exist"
	I0314 18:49:44.124836       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:49:44.147091       1 range_allocator.go:380] "Set node PodCIDR" node="multinode-669543-m03" podCIDRs=["10.244.3.0/24"]
	I0314 18:49:47.975765       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="RegisteredNode" message="Node multinode-669543-m03 event: Registered Node multinode-669543-m03 in Controller"
	I0314 18:49:50.725132       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:50:33.008824       1 event.go:307] "Event occurred" object="multinode-669543-m03" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-669543-m03 status is now: NodeNotReady"
	I0314 18:50:33.008896       1 topologycache.go:237] "Can't get CPU or zone information for node" node="multinode-669543-m02"
	I0314 18:50:33.013630       1 event.go:307] "Event occurred" object="multinode-669543-m02" fieldPath="" kind="Node" apiVersion="v1" type="Normal" reason="NodeNotReady" message="Node multinode-669543-m02 status is now: NodeNotReady"
	I0314 18:50:33.023667       1 event.go:307] "Event occurred" object="kube-system/kindnet-xcn2w" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.026247       1 event.go:307] "Event occurred" object="kube-system/kindnet-fjd7q" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.037234       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-vjshs" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.043673       1 event.go:307] "Event occurred" object="kube-system/kube-proxy-r4pb9" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.060886       1 event.go:307] "Event occurred" object="default/busybox-5b5d89c9d6-nslm6" fieldPath="" kind="Pod" apiVersion="v1" type="Warning" reason="NodeNotReady" message="Node is not ready"
	I0314 18:50:33.072773       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="12.02508ms"
	I0314 18:50:33.073223       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="default/busybox-5b5d89c9d6" duration="85.502µs"
	
	
	==> kube-proxy [1a160e30f8660cca64e8632284e0c6272160b350417acb1dc6d83a888d856434] <==
	I0314 18:47:40.337555       1 server_others.go:69] "Using iptables proxy"
	I0314 18:47:40.357136       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0314 18:47:40.425312       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:47:40.425331       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:47:40.428840       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:47:40.429363       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:47:40.429532       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:47:40.429662       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:47:40.434626       1 config.go:188] "Starting service config controller"
	I0314 18:47:40.434677       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:47:40.434712       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:47:40.434728       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:47:40.435404       1 config.go:315] "Starting node config controller"
	I0314 18:47:40.435440       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:47:40.535470       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 18:47:40.535521       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:47:40.535530       1 shared_informer.go:318] Caches are synced for service config
	
	
	==> kube-proxy [97671595afeadf32c4ab5578309c623b4f31ff51c80296e5b4bc066b46d80517] <==
	I0314 18:53:44.853685       1 server_others.go:69] "Using iptables proxy"
	I0314 18:53:44.889462       1 node.go:141] Successfully retrieved node IP: 192.168.39.68
	I0314 18:53:45.020189       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 18:53:45.020213       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 18:53:45.029023       1 server_others.go:152] "Using iptables Proxier"
	I0314 18:53:45.029079       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 18:53:45.029303       1 server.go:846] "Version info" version="v1.28.4"
	I0314 18:53:45.029314       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:53:45.030769       1 config.go:188] "Starting service config controller"
	I0314 18:53:45.030777       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 18:53:45.030805       1 config.go:97] "Starting endpoint slice config controller"
	I0314 18:53:45.030808       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 18:53:45.031221       1 config.go:315] "Starting node config controller"
	I0314 18:53:45.031227       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 18:53:45.131252       1 shared_informer.go:318] Caches are synced for node config
	I0314 18:53:45.131549       1 shared_informer.go:318] Caches are synced for service config
	I0314 18:53:45.131765       1 shared_informer.go:318] Caches are synced for endpoint slice config
	
	
	==> kube-scheduler [35d37a68a0eec3ddbc161b0cc203058594c36caf432f394ecab0728b3881a5e2] <==
	E0314 18:47:22.430869       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:47:22.430900       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:47:22.431296       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:47:22.431120       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:47:22.431409       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:47:23.337394       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 18:47:23.337450       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 18:47:23.402062       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 18:47:23.402118       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 18:47:23.417805       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 18:47:23.418232       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 18:47:23.422148       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 18:47:23.422239       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 18:47:23.450690       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 18:47:23.450760       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 18:47:23.523831       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 18:47:23.524092       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 18:47:23.558399       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 18:47:23.558559       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 18:47:23.623673       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 18:47:23.623811       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	I0314 18:47:25.706218       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:51:57.228640       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 18:51:57.228887       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	E0314 18:51:57.231778       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kube-scheduler [dabf1dd85b17a993d661d0ddae99290aa9dcfa0228580d94765756c4f67e522e] <==
	I0314 18:53:41.329066       1 serving.go:348] Generated self-signed cert in-memory
	I0314 18:53:43.267292       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.28.4"
	I0314 18:53:43.267394       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 18:53:43.278281       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 18:53:43.279150       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 18:53:43.279195       1 requestheader_controller.go:169] Starting RequestHeaderAuthRequestController
	I0314 18:53:43.279295       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 18:53:43.293238       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:53:43.279304       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file"
	I0314 18:53:43.293292       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0314 18:53:43.294557       1 shared_informer.go:311] Waiting for caches to sync for RequestHeaderAuthRequestController
	I0314 18:53:43.394528       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 18:53:43.394528       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::requestheader-client-ca-file
	I0314 18:53:43.395047       1 shared_informer.go:318] Caches are synced for RequestHeaderAuthRequestController
	
	
	==> kubelet <==
	Mar 14 18:55:38 multinode-669543 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:55:38 multinode-669543 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.973505    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/poda06c8b3d-ca1e-491c-bb77-ce60da9c5f96/crio-c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Error finding container c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Status 404 returned error can't find the container with id c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.973965    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2e875679-7a0f-4c0e-bb89-61d1d25322b9/crio-5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Error finding container 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Status 404 returned error can't find the container with id 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.974435    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podcd1dfb67d1de36699e8c1e198b392ffb/crio-6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Error finding container 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Status 404 returned error can't find the container with id 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.974655    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod9bfe873d8c178b7d6aa8ac90faa3a096/crio-c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Error finding container c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Status 404 returned error can't find the container with id c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.974797    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod5a0929f2d9c353ee576b697cb4a8fdc9/crio-e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Error finding container e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Status 404 returned error can't find the container with id e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.975287    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod36fffd3c-245b-4633-96d8-3c1fc216830c/crio-b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Error finding container b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Status 404 returned error can't find the container with id b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.975747    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podba3b7b3c-c295-4e3f-98f3-66278b5bf7d6/crio-a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Error finding container a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Status 404 returned error can't find the container with id a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.976130    3088 manager.go:1106] Failed to create existing container: /kubepods/podd74393fb-ce21-43b4-9400-960184cbe665/crio-9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Error finding container 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Status 404 returned error can't find the container with id 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4
	Mar 14 18:55:38 multinode-669543 kubelet[3088]: E0314 18:55:38.976299    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podacac00fe4893897b1bd8f4bb5003ed66/crio-375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Error finding container 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Status 404 returned error can't find the container with id 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.934646    3088 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 18:56:38 multinode-669543 kubelet[3088]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 18:56:38 multinode-669543 kubelet[3088]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 18:56:38 multinode-669543 kubelet[3088]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 18:56:38 multinode-669543 kubelet[3088]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.973283    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod9bfe873d8c178b7d6aa8ac90faa3a096/crio-c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Error finding container c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838: Status 404 returned error can't find the container with id c80d82d6e24a9fcb1b77e2c94b304c08547121ddfe33c8cccb29e4d6b4d84838
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.973666    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podcd1dfb67d1de36699e8c1e198b392ffb/crio-6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Error finding container 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6: Status 404 returned error can't find the container with id 6f734f5704330ad78c44bc08bea5e66d5f1f164e7af2c735a8a6fea71e49b8d6
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.974131    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/podba3b7b3c-c295-4e3f-98f3-66278b5bf7d6/crio-a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Error finding container a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0: Status 404 returned error can't find the container with id a78eb4281690b37b5a8205d35b65e5c35a1a2c30606b9d90f5036f667862b4a0
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.974477    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/poda06c8b3d-ca1e-491c-bb77-ce60da9c5f96/crio-c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Error finding container c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9: Status 404 returned error can't find the container with id c44d003130ba6c9716afc33fad5098a84c193de1d5a0f36ddbdfec1388c175c9
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.974848    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod5a0929f2d9c353ee576b697cb4a8fdc9/crio-e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Error finding container e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84: Status 404 returned error can't find the container with id e87a510584e329fd9a5ebaa146d8b26596c1775014533bf89c3100f23b20bd84
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.975241    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/pod36fffd3c-245b-4633-96d8-3c1fc216830c/crio-b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Error finding container b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b: Status 404 returned error can't find the container with id b65e68430d8177d49105104be3af73a871829400635fe62764ef851f1a445d6b
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.975550    3088 manager.go:1106] Failed to create existing container: /kubepods/burstable/podacac00fe4893897b1bd8f4bb5003ed66/crio-375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Error finding container 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03: Status 404 returned error can't find the container with id 375c781f82868fa4d151ede792a3b61236f7b561963585069387f0fa8ac52e03
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.976112    3088 manager.go:1106] Failed to create existing container: /kubepods/besteffort/pod2e875679-7a0f-4c0e-bb89-61d1d25322b9/crio-5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Error finding container 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88: Status 404 returned error can't find the container with id 5ff3ff89264fe465b15128168448a0452458f01e89531d676a32b4a3d92abb88
	Mar 14 18:56:38 multinode-669543 kubelet[3088]: E0314 18:56:38.976389    3088 manager.go:1106] Failed to create existing container: /kubepods/podd74393fb-ce21-43b4-9400-960184cbe665/crio-9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Error finding container 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4: Status 404 returned error can't find the container with id 9b9c5bc67c578e3f5b5d6b2b932c682b1bf15ae69b8bd34fce141861074e0eb4
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 18:57:28.629977  977920 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p multinode-669543 -n multinode-669543
helpers_test.go:261: (dbg) Run:  kubectl --context multinode-669543 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestMultiNode/serial/StopMultiNode FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
--- FAIL: TestMultiNode/serial/StopMultiNode (141.50s)

                                                
                                    
x
+
TestPreload (270.92s)

                                                
                                                
=== RUN   TestPreload
preload_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p test-preload-273910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4
E0314 19:02:14.528602  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:02:14.853745  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
preload_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p test-preload-273910 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.24.4: (2m9.467432748s)
preload_test.go:52: (dbg) Run:  out/minikube-linux-amd64 -p test-preload-273910 image pull gcr.io/k8s-minikube/busybox
preload_test.go:52: (dbg) Done: out/minikube-linux-amd64 -p test-preload-273910 image pull gcr.io/k8s-minikube/busybox: (1.185935339s)
preload_test.go:58: (dbg) Run:  out/minikube-linux-amd64 stop -p test-preload-273910
preload_test.go:58: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p test-preload-273910: exit status 82 (2m0.498379479s)

                                                
                                                
-- stdout --
	* Stopping node "test-preload-273910"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
preload_test.go:60: out/minikube-linux-amd64 stop -p test-preload-273910 failed: exit status 82
panic.go:626: *** TestPreload FAILED at 2024-03-14 19:05:26.794442853 +0000 UTC m=+3653.280301731
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-273910 -n test-preload-273910
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p test-preload-273910 -n test-preload-273910: exit status 3 (18.61138088s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:05:45.400519  980323 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host
	E0314 19:05:45.400538  980323 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.133:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "test-preload-273910" host is not running, skipping log retrieval (state="Error")
helpers_test.go:175: Cleaning up "test-preload-273910" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p test-preload-273910
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p test-preload-273910: (1.155727449s)
--- FAIL: TestPreload (270.92s)

                                                
                                    
x
+
TestKubernetesUpgrade (469.68s)

                                                
                                                
=== RUN   TestKubernetesUpgrade
=== PAUSE TestKubernetesUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestKubernetesUpgrade
version_upgrade_test.go:222: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:222: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109 (5m0.359835684s)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-097195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "kubernetes-upgrade-097195" primary control-plane node in "kubernetes-upgrade-097195" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:07:44.160322  981274 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:07:44.160448  981274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:07:44.160461  981274 out.go:304] Setting ErrFile to fd 2...
	I0314 19:07:44.160466  981274 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:07:44.160672  981274 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:07:44.161244  981274 out.go:298] Setting JSON to false
	I0314 19:07:44.162227  981274 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":96616,"bootTime":1710346648,"procs":181,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:07:44.162298  981274 start.go:139] virtualization: kvm guest
	I0314 19:07:44.165649  981274 out.go:177] * [kubernetes-upgrade-097195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:07:44.168268  981274 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:07:44.167059  981274 notify.go:220] Checking for updates...
	I0314 19:07:44.171007  981274 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:07:44.172928  981274 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:07:44.174309  981274 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:07:44.175990  981274 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:07:44.177573  981274 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:07:44.179033  981274 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:07:44.223341  981274 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:07:44.225315  981274 start.go:297] selected driver: kvm2
	I0314 19:07:44.225332  981274 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:07:44.225345  981274 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:07:44.226133  981274 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:07:44.240895  981274 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:07:44.257149  981274 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:07:44.257204  981274 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:07:44.257456  981274 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 19:07:44.257491  981274 cni.go:84] Creating CNI manager for ""
	I0314 19:07:44.257506  981274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:07:44.257516  981274 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:07:44.257590  981274 start.go:340] cluster config:
	{Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:07:44.257711  981274 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:07:44.259317  981274 out.go:177] * Starting "kubernetes-upgrade-097195" primary control-plane node in "kubernetes-upgrade-097195" cluster
	I0314 19:07:44.260546  981274 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:07:44.260607  981274 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 19:07:44.260677  981274 cache.go:56] Caching tarball of preloaded images
	I0314 19:07:44.260797  981274 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:07:44.260811  981274 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 19:07:44.261216  981274 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/config.json ...
	I0314 19:07:44.261249  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/config.json: {Name:mk706db7d16f1d2f1cb70f2324b1d41c4f4cf146 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:07:44.261401  981274 start.go:360] acquireMachinesLock for kubernetes-upgrade-097195: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:08:08.174040  981274 start.go:364] duration metric: took 23.912595278s to acquireMachinesLock for "kubernetes-upgrade-097195"
	I0314 19:08:08.174112  981274 start.go:93] Provisioning new machine with config: &{Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:08:08.174239  981274 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 19:08:08.176379  981274 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:08:08.176618  981274 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:08:08.176672  981274 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:08:08.194341  981274 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42549
	I0314 19:08:08.194752  981274 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:08:08.195454  981274 main.go:141] libmachine: Using API Version  1
	I0314 19:08:08.195470  981274 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:08:08.195815  981274 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:08:08.196029  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:08:08.196195  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:08.196383  981274 start.go:159] libmachine.API.Create for "kubernetes-upgrade-097195" (driver="kvm2")
	I0314 19:08:08.196416  981274 client.go:168] LocalClient.Create starting
	I0314 19:08:08.196450  981274 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:08:08.196499  981274 main.go:141] libmachine: Decoding PEM data...
	I0314 19:08:08.196526  981274 main.go:141] libmachine: Parsing certificate...
	I0314 19:08:08.196597  981274 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:08:08.196623  981274 main.go:141] libmachine: Decoding PEM data...
	I0314 19:08:08.196644  981274 main.go:141] libmachine: Parsing certificate...
	I0314 19:08:08.196673  981274 main.go:141] libmachine: Running pre-create checks...
	I0314 19:08:08.196691  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .PreCreateCheck
	I0314 19:08:08.197078  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetConfigRaw
	I0314 19:08:08.197594  981274 main.go:141] libmachine: Creating machine...
	I0314 19:08:08.197613  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .Create
	I0314 19:08:08.197764  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Creating KVM machine...
	I0314 19:08:08.198933  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found existing default KVM network
	I0314 19:08:08.199809  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.199668  981584 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:02:8d:57} reservation:<nil>}
	I0314 19:08:08.200577  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.200467  981584 network.go:206] using free private subnet 192.168.50.0/24: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000254320}
	I0314 19:08:08.200603  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | created network xml: 
	I0314 19:08:08.200620  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | <network>
	I0314 19:08:08.200639  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   <name>mk-kubernetes-upgrade-097195</name>
	I0314 19:08:08.200650  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   <dns enable='no'/>
	I0314 19:08:08.200671  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   
	I0314 19:08:08.200682  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   <ip address='192.168.50.1' netmask='255.255.255.0'>
	I0314 19:08:08.200696  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |     <dhcp>
	I0314 19:08:08.200706  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |       <range start='192.168.50.2' end='192.168.50.253'/>
	I0314 19:08:08.200718  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |     </dhcp>
	I0314 19:08:08.200747  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   </ip>
	I0314 19:08:08.200773  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG |   
	I0314 19:08:08.200815  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | </network>
	I0314 19:08:08.200838  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | 
	I0314 19:08:08.205686  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | trying to create private KVM network mk-kubernetes-upgrade-097195 192.168.50.0/24...
	I0314 19:08:08.280104  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | private KVM network mk-kubernetes-upgrade-097195 192.168.50.0/24 created
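The two network.go lines above show how the driver picks a subnet: 192.168.39.0/24 is already taken by another profile, so it falls back to the next free private /24, 192.168.50.0/24, and creates the libvirt network from the XML that follows. A small sketch of that free-subnet scan, with an illustrative candidate list and function name:

package main

import (
	"fmt"
	"net"
)

// firstFreeSubnet returns the first candidate /24 that does not collide with
// any subnet already in use. Purely illustrative; minikube's real logic also
// inspects host interfaces and existing reservations.
func firstFreeSubnet(candidates, inUse []string) (string, error) {
	var used []*net.IPNet
	for _, c := range inUse {
		_, n, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		used = append(used, n)
	}
	for _, c := range candidates {
		ip, n, err := net.ParseCIDR(c)
		if err != nil {
			return "", err
		}
		taken := false
		for _, u := range used {
			if u.Contains(ip) || n.Contains(u.IP) {
				taken = true
				break
			}
		}
		if !taken {
			return c, nil
		}
	}
	return "", fmt.Errorf("no free subnet among %v", candidates)
}

func main() {
	free, err := firstFreeSubnet(
		[]string{"192.168.39.0/24", "192.168.50.0/24", "192.168.61.0/24"},
		[]string{"192.168.39.0/24"}, // taken by another profile, per the log
	)
	if err != nil {
		panic(err)
	}
	fmt.Println("using free private subnet:", free) // 192.168.50.0/24
}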
	I0314 19:08:08.280145  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195 ...
	I0314 19:08:08.280161  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.280034  981584 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:08:08.280178  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:08:08.280225  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:08:08.525553  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.525438  981584 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa...
	I0314 19:08:08.611701  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.611545  981584 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/kubernetes-upgrade-097195.rawdisk...
	I0314 19:08:08.611778  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Writing magic tar header
	I0314 19:08:08.611799  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195 (perms=drwx------)
	I0314 19:08:08.611810  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Writing SSH key tar header
	I0314 19:08:08.611829  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:08.611655  981584 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195 ...
	I0314 19:08:08.611850  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195
	I0314 19:08:08.611869  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:08:08.611880  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:08:08.611895  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:08:08.611911  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:08:08.611928  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:08:08.611943  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:08:08.611957  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:08:08.611969  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Creating domain...
	I0314 19:08:08.611979  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:08:08.611991  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:08:08.612003  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:08:08.612018  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Checking permissions on dir: /home
	I0314 19:08:08.612028  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Skipping /home - not owner
	I0314 19:08:08.613081  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) define libvirt domain using xml: 
	I0314 19:08:08.613112  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) <domain type='kvm'>
	I0314 19:08:08.613147  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <name>kubernetes-upgrade-097195</name>
	I0314 19:08:08.613175  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <memory unit='MiB'>2200</memory>
	I0314 19:08:08.613196  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <vcpu>2</vcpu>
	I0314 19:08:08.613207  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <features>
	I0314 19:08:08.613218  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <acpi/>
	I0314 19:08:08.613230  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <apic/>
	I0314 19:08:08.613248  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <pae/>
	I0314 19:08:08.613264  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     
	I0314 19:08:08.613301  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   </features>
	I0314 19:08:08.613342  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <cpu mode='host-passthrough'>
	I0314 19:08:08.613356  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   
	I0314 19:08:08.613367  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   </cpu>
	I0314 19:08:08.613378  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <os>
	I0314 19:08:08.613390  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <type>hvm</type>
	I0314 19:08:08.613404  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <boot dev='cdrom'/>
	I0314 19:08:08.613413  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <boot dev='hd'/>
	I0314 19:08:08.613427  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <bootmenu enable='no'/>
	I0314 19:08:08.613440  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   </os>
	I0314 19:08:08.613453  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   <devices>
	I0314 19:08:08.613489  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <disk type='file' device='cdrom'>
	I0314 19:08:08.613507  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/boot2docker.iso'/>
	I0314 19:08:08.613525  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <target dev='hdc' bus='scsi'/>
	I0314 19:08:08.613539  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <readonly/>
	I0314 19:08:08.613555  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </disk>
	I0314 19:08:08.613569  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <disk type='file' device='disk'>
	I0314 19:08:08.613583  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:08:08.613613  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/kubernetes-upgrade-097195.rawdisk'/>
	I0314 19:08:08.613629  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <target dev='hda' bus='virtio'/>
	I0314 19:08:08.613637  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </disk>
	I0314 19:08:08.613653  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <interface type='network'>
	I0314 19:08:08.613666  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <source network='mk-kubernetes-upgrade-097195'/>
	I0314 19:08:08.613681  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <model type='virtio'/>
	I0314 19:08:08.613699  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </interface>
	I0314 19:08:08.613714  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <interface type='network'>
	I0314 19:08:08.613727  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <source network='default'/>
	I0314 19:08:08.613738  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <model type='virtio'/>
	I0314 19:08:08.613751  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </interface>
	I0314 19:08:08.613771  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <serial type='pty'>
	I0314 19:08:08.613788  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <target port='0'/>
	I0314 19:08:08.613803  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </serial>
	I0314 19:08:08.613815  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <console type='pty'>
	I0314 19:08:08.613830  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <target type='serial' port='0'/>
	I0314 19:08:08.613842  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </console>
	I0314 19:08:08.613871  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     <rng model='virtio'>
	I0314 19:08:08.613892  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)       <backend model='random'>/dev/random</backend>
	I0314 19:08:08.613903  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     </rng>
	I0314 19:08:08.613914  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     
	I0314 19:08:08.613923  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)     
	I0314 19:08:08.613933  981274 main.go:141] libmachine: (kubernetes-upgrade-097195)   </devices>
	I0314 19:08:08.613941  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) </domain>
	I0314 19:08:08.613951  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) 
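The domain definition above is rendered from a template and then handed to libvirt to define the VM. A stripped-down sketch of that templating step in Go (the field names, paths and reduced device list are illustrative, not minikube's actual template):

package main

import (
	"os"
	"text/template"
)

// domainTmpl mirrors the shape of the XML logged above: name, memory, vcpus,
// a cdrom boot ISO and a raw virtio disk. The real template carries more
// devices (serial console, rng, a second NIC on the default network).
const domainTmpl = `<domain type='kvm'>
  <name>{{.Name}}</name>
  <memory unit='MiB'>{{.MemoryMiB}}</memory>
  <vcpu>{{.CPUs}}</vcpu>
  <os>
    <type>hvm</type>
    <boot dev='cdrom'/>
    <boot dev='hd'/>
  </os>
  <devices>
    <disk type='file' device='cdrom'>
      <source file='{{.ISO}}'/>
      <target dev='hdc' bus='scsi'/>
      <readonly/>
    </disk>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='default' io='threads'/>
      <source file='{{.DiskPath}}'/>
      <target dev='hda' bus='virtio'/>
    </disk>
    <interface type='network'>
      <source network='{{.Network}}'/>
      <model type='virtio'/>
    </interface>
  </devices>
</domain>
`

type domainConfig struct {
	Name      string
	MemoryMiB int
	CPUs      int
	ISO       string
	DiskPath  string
	Network   string
}

func main() {
	cfg := domainConfig{
		Name:      "kubernetes-upgrade-097195",
		MemoryMiB: 2200,
		CPUs:      2,
		ISO:       "/path/to/boot2docker.iso",    // placeholder
		DiskPath:  "/path/to/profile.rawdisk",    // placeholder
		Network:   "mk-kubernetes-upgrade-097195",
	}
	tmpl := template.Must(template.New("domain").Parse(domainTmpl))
	// The rendered XML is what the driver passes to libvirt to define the domain.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}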
	I0314 19:08:08.621211  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:55:4c:dd in network default
	I0314 19:08:08.621987  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Ensuring networks are active...
	I0314 19:08:08.622016  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:08.622830  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Ensuring network default is active
	I0314 19:08:08.623251  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Ensuring network mk-kubernetes-upgrade-097195 is active
	I0314 19:08:08.623968  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Getting domain xml...
	I0314 19:08:08.625018  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Creating domain...
	I0314 19:08:09.910039  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Waiting to get IP...
	I0314 19:08:09.910801  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:09.911329  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:09.911370  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:09.911268  981584 retry.go:31] will retry after 230.91ms: waiting for machine to come up
	I0314 19:08:10.144040  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.144521  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.144552  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:10.144465  981584 retry.go:31] will retry after 238.721717ms: waiting for machine to come up
	I0314 19:08:10.384878  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.385337  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.385372  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:10.385302  981584 retry.go:31] will retry after 385.827594ms: waiting for machine to come up
	I0314 19:08:10.773126  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.773825  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:10.773913  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:10.773763  981584 retry.go:31] will retry after 489.431831ms: waiting for machine to come up
	I0314 19:08:11.264692  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:11.265226  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:11.265262  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:11.265172  981584 retry.go:31] will retry after 590.346868ms: waiting for machine to come up
	I0314 19:08:11.857182  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:11.857641  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:11.857669  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:11.857594  981584 retry.go:31] will retry after 754.91311ms: waiting for machine to come up
	I0314 19:08:12.614751  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:12.615457  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:12.615489  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:12.615404  981584 retry.go:31] will retry after 1.056137778s: waiting for machine to come up
	I0314 19:08:13.673004  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:13.673496  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:13.673529  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:13.673441  981584 retry.go:31] will retry after 1.11462726s: waiting for machine to come up
	I0314 19:08:14.789629  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:14.790168  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:14.790198  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:14.790130  981584 retry.go:31] will retry after 1.38036302s: waiting for machine to come up
	I0314 19:08:16.172108  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:16.172678  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:16.172706  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:16.172610  981584 retry.go:31] will retry after 1.513337985s: waiting for machine to come up
	I0314 19:08:17.688457  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:17.688984  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:17.689020  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:17.688927  981584 retry.go:31] will retry after 2.085065326s: waiting for machine to come up
	I0314 19:08:19.776972  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:19.777488  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:19.777522  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:19.777439  981584 retry.go:31] will retry after 3.20860108s: waiting for machine to come up
	I0314 19:08:22.989547  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:22.990039  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:22.990076  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:22.989971  981584 retry.go:31] will retry after 3.95073634s: waiting for machine to come up
	I0314 19:08:26.945555  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:26.946065  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find current IP address of domain kubernetes-upgrade-097195 in network mk-kubernetes-upgrade-097195
	I0314 19:08:26.946094  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | I0314 19:08:26.946025  981584 retry.go:31] will retry after 4.283310523s: waiting for machine to come up
	I0314 19:08:31.233313  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:31.233929  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Found IP for machine: 192.168.50.124
	I0314 19:08:31.233957  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Reserving static IP address...
	I0314 19:08:31.233972  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has current primary IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:31.234277  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find host DHCP lease matching {name: "kubernetes-upgrade-097195", mac: "52:54:00:3b:7f:e0", ip: "192.168.50.124"} in network mk-kubernetes-upgrade-097195
	I0314 19:08:31.311593  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Reserved static IP address: 192.168.50.124
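The "will retry after ..." intervals above (roughly 230ms growing to about 4.3s) come from a jittered, growing backoff while the driver polls the network's DHCP leases for the new MAC address. A compact sketch of that polling loop; lookupIP is a hypothetical stand-in for the lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

var errNoLease = errors.New("no DHCP lease yet")

// lookupIP stands in for querying the libvirt network's DHCP leases for the
// domain's MAC address; here it simply succeeds after a few attempts.
func lookupIP(attempt int) (string, error) {
	if attempt < 5 {
		return "", errNoLease
	}
	return "192.168.50.124", nil
}

// waitForIP polls with a jittered, growing delay, much like the
// "will retry after ..." lines in the log above.
func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 200 * time.Millisecond
	for attempt := 0; time.Now().Before(deadline); attempt++ {
		ip, err := lookupIP(attempt)
		if err == nil {
			return ip, nil
		}
		jitter := time.Duration(rand.Int63n(int64(delay) / 2))
		wait := delay + jitter
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay += delay / 2 // grow ~1.5x per attempt; the deadline bounds the loop
	}
	return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
}

func main() {
	ip, err := waitForIP(30 * time.Second)
	if err != nil {
		panic(err)
	}
	fmt.Println("Found IP for machine:", ip)
}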
	I0314 19:08:31.311636  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Waiting for SSH to be available...
	I0314 19:08:31.311650  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Getting to WaitForSSH function...
	I0314 19:08:31.315690  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:31.316034  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195
	I0314 19:08:31.316053  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | unable to find defined IP address of network mk-kubernetes-upgrade-097195 interface with MAC address 52:54:00:3b:7f:e0
	I0314 19:08:31.316242  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Using SSH client type: external
	I0314 19:08:31.316282  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa (-rw-------)
	I0314 19:08:31.316317  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@ -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:08:31.316336  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | About to run SSH command:
	I0314 19:08:31.316354  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | exit 0
	I0314 19:08:31.319907  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | SSH cmd err, output: exit status 255: 
	I0314 19:08:31.319971  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Error getting ssh command 'exit 0' : ssh command error:
	I0314 19:08:31.319989  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | command : exit 0
	I0314 19:08:31.319998  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | err     : exit status 255
	I0314 19:08:31.320014  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | output  : 
	I0314 19:08:34.320776  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Getting to WaitForSSH function...
	I0314 19:08:34.323573  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.324079  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.324102  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.324285  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Using SSH client type: external
	I0314 19:08:34.324308  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa (-rw-------)
	I0314 19:08:34.324844  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.124 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:08:34.324889  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | About to run SSH command:
	I0314 19:08:34.324907  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | exit 0
	I0314 19:08:34.452108  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | SSH cmd err, output: <nil>: 
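The WaitForSSH step shells out to /usr/bin/ssh with the non-interactive options listed above and runs "exit 0"; the first probe fails with status 255 because the guest has no lease yet, and the retry three seconds later succeeds. A hedged sketch of assembling such an external probe (host, user and key path are placeholders):

package main

import (
	"fmt"
	"os/exec"
)

// sshReady runs `exit 0` over an external ssh binary with the same kind of
// non-interactive options seen in the log; a nil error means the guest's
// sshd answered and accepted the key.
func sshReady(user, host, keyPath string) error {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "PasswordAuthentication=no",
		"-o", "IdentitiesOnly=yes",
		"-i", keyPath,
		"-p", "22",
		fmt.Sprintf("%s@%s", user, host),
		"exit 0",
	}
	return exec.Command("/usr/bin/ssh", args...).Run()
}

func main() {
	// Placeholder host and key path; the log uses the machine's id_rsa under
	// .minikube/machines/<profile>/.
	if err := sshReady("docker", "192.168.50.124", "/path/to/id_rsa"); err != nil {
		fmt.Println("SSH not ready yet:", err)
		return
	}
	fmt.Println("SSH is available")
}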
	I0314 19:08:34.452350  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) KVM machine creation complete!
	I0314 19:08:34.452660  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetConfigRaw
	I0314 19:08:34.453229  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:34.453424  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:34.453599  981274 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 19:08:34.453618  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetState
	I0314 19:08:34.454906  981274 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 19:08:34.454923  981274 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 19:08:34.454931  981274 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 19:08:34.454940  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:34.457177  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.457535  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.457565  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.457770  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:34.457964  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.458172  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.458314  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:34.458486  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:34.458723  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:34.458735  981274 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 19:08:34.571873  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:08:34.571897  981274 main.go:141] libmachine: Detecting the provisioner...
	I0314 19:08:34.571905  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:34.574655  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.575039  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.575076  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.575234  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:34.575464  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.575614  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.575761  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:34.576003  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:34.576178  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:34.576189  981274 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 19:08:34.693405  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 19:08:34.693482  981274 main.go:141] libmachine: found compatible host: buildroot
	I0314 19:08:34.693491  981274 main.go:141] libmachine: Provisioning with buildroot...
	I0314 19:08:34.693499  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:08:34.693783  981274 buildroot.go:166] provisioning hostname "kubernetes-upgrade-097195"
	I0314 19:08:34.693813  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:08:34.694033  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:34.696689  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.697013  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.697044  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.697118  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:34.697289  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.697473  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.697669  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:34.697827  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:34.697990  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:34.698002  981274 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-097195 && echo "kubernetes-upgrade-097195" | sudo tee /etc/hostname
	I0314 19:08:34.827335  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-097195
	
	I0314 19:08:34.827369  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:34.830462  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.830855  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.830888  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.831045  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:34.831277  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.831435  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:34.831670  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:34.831876  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:34.832058  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:34.832075  981274 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-097195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-097195/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-097195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:08:34.959408  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
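The hostname step runs two commands over SSH: one sets /etc/hostname, the other rewrites or appends a 127.0.1.1 entry so the new name resolves locally, as shown in the shell snippet above. A rough sketch of composing those command strings from the profile name (helper name is illustrative):

package main

import "fmt"

// hostnameCommands builds the two shell commands the provisioner runs over SSH:
// one to set the hostname, one to make sure /etc/hosts can resolve it.
func hostnameCommands(name string) (setHostname, fixHosts string) {
	setHostname = fmt.Sprintf(
		"sudo hostname %s && echo %q | sudo tee /etc/hostname", name, name)
	fixHosts = fmt.Sprintf(`
		if ! grep -xq '.*\s%s' /etc/hosts; then
			if grep -xq '127.0.1.1\s.*' /etc/hosts; then
				sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %s/g' /etc/hosts;
			else
				echo '127.0.1.1 %s' | sudo tee -a /etc/hosts;
			fi
		fi`, name, name, name)
	return setHostname, fixHosts
}

func main() {
	set, fix := hostnameCommands("kubernetes-upgrade-097195")
	fmt.Println(set)
	fmt.Println(fix)
}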
	I0314 19:08:34.959443  981274 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:08:34.959510  981274 buildroot.go:174] setting up certificates
	I0314 19:08:34.959535  981274 provision.go:84] configureAuth start
	I0314 19:08:34.959555  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:08:34.959866  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:08:34.962496  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.962812  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.962839  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.962935  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:34.965438  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.965862  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:34.965896  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:34.966035  981274 provision.go:143] copyHostCerts
	I0314 19:08:34.966151  981274 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:08:34.966167  981274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:08:34.966225  981274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:08:34.966340  981274 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:08:34.966353  981274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:08:34.966379  981274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:08:34.966446  981274 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:08:34.966454  981274 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:08:34.966474  981274 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:08:34.966524  981274 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-097195 san=[127.0.0.1 192.168.50.124 kubernetes-upgrade-097195 localhost minikube]
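configureAuth generates a server certificate whose SANs cover the loopback address, the machine IP, the profile hostname, localhost and minikube, signed by the CA under .minikube/certs. A self-contained sketch of issuing a certificate with that SAN list (self-signed here for brevity, which differs from the real CA-signed flow):

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"fmt"
	"math/big"
	"net"
	"time"
)

// newServerCert creates a server certificate with the SAN list seen in the
// log. Sketch only: the real provisioner signs with the CA key and persists
// server.pem / server-key.pem into the machine store.
func newServerCert(org string, ip net.IP, dnsNames []string) (certPEM, keyPEM []byte, err error) {
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		return nil, nil, err
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(time.Now().UnixNano()),
		Subject:      pkix.Name{Organization: []string{org}},
		NotBefore:    time.Now().Add(-time.Hour),
		NotAfter:     time.Now().Add(26280 * time.Hour), // matches CertExpiration in the config dump
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), ip},
		DNSNames:     dnsNames,
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		return nil, nil, err
	}
	certPEM = pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der})
	keyPEM = pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)})
	return certPEM, keyPEM, nil
}

func main() {
	cert, _, err := newServerCert(
		"jenkins.kubernetes-upgrade-097195",
		net.ParseIP("192.168.50.124"),
		[]string{"kubernetes-upgrade-097195", "localhost", "minikube"},
	)
	if err != nil {
		panic(err)
	}
	fmt.Printf("server.pem is %d bytes\n", len(cert))
}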
	I0314 19:08:35.559246  981274 provision.go:177] copyRemoteCerts
	I0314 19:08:35.559312  981274 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:08:35.559351  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:35.562062  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:35.562441  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:35.562467  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:35.562633  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:35.562816  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:35.562962  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:35.563062  981274 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:08:35.652638  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:08:35.678990  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0314 19:08:35.704518  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:08:35.729446  981274 provision.go:87] duration metric: took 769.892243ms to configureAuth
	I0314 19:08:35.729480  981274 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:08:35.729662  981274 config.go:182] Loaded profile config "kubernetes-upgrade-097195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:08:35.729744  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:35.732562  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:35.732939  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:35.732979  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:35.733134  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:35.733343  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:35.733507  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:35.733614  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:35.733809  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:35.734017  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:35.734038  981274 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:08:36.026539  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:08:36.026590  981274 main.go:141] libmachine: Checking connection to Docker...
	I0314 19:08:36.026603  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetURL
	I0314 19:08:36.027908  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | Using libvirt version 6000000
	I0314 19:08:36.030389  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.030784  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.030835  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.030980  981274 main.go:141] libmachine: Docker is up and running!
	I0314 19:08:36.030997  981274 main.go:141] libmachine: Reticulating splines...
	I0314 19:08:36.031006  981274 client.go:171] duration metric: took 27.834578123s to LocalClient.Create
	I0314 19:08:36.031039  981274 start.go:167] duration metric: took 27.834657267s to libmachine.API.Create "kubernetes-upgrade-097195"
	I0314 19:08:36.031051  981274 start.go:293] postStartSetup for "kubernetes-upgrade-097195" (driver="kvm2")
	I0314 19:08:36.031068  981274 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:08:36.031089  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:36.031324  981274 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:08:36.031346  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:36.033619  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.033962  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.033985  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.034205  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:36.034367  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:36.034539  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:36.034707  981274 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:08:36.125906  981274 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:08:36.131044  981274 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:08:36.131069  981274 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:08:36.131140  981274 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:08:36.131242  981274 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:08:36.131356  981274 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:08:36.144104  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:08:36.172608  981274 start.go:296] duration metric: took 141.536577ms for postStartSetup
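postStartSetup scans .minikube/files and mirrors everything it finds onto the guest, which is how files/etc/ssl/certs/9513112.pem ends up in /etc/ssl/certs. A short sketch of that local-asset scan (function name is illustrative):

package main

import (
	"fmt"
	"io/fs"
	"path/filepath"
	"strings"
)

// localAssets walks the profile's files/ directory and maps every file to the
// absolute guest path it should be copied to. Sketch only; minikube's filesync
// also scans the addons directory and handles permissions.
func localAssets(filesRoot string) (map[string]string, error) {
	assets := map[string]string{}
	err := filepath.WalkDir(filesRoot, func(path string, d fs.DirEntry, err error) error {
		if err != nil || d.IsDir() {
			return err
		}
		rel := strings.TrimPrefix(path, filesRoot)
		assets[path] = rel // e.g. /etc/ssl/certs/9513112.pem on the guest
		return nil
	})
	return assets, err
}

func main() {
	assets, err := localAssets("/home/jenkins/minikube-integration/18384-942544/.minikube/files")
	if err != nil {
		fmt.Println("scan failed:", err)
		return
	}
	for src, dst := range assets {
		fmt.Printf("local asset: %s -> %s\n", src, dst)
	}
}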
	I0314 19:08:36.172675  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetConfigRaw
	I0314 19:08:36.173332  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:08:36.176068  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.176500  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.176530  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.176818  981274 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/config.json ...
	I0314 19:08:36.177083  981274 start.go:128] duration metric: took 28.002828014s to createHost
	I0314 19:08:36.177120  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:36.179358  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.179752  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.179797  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.179870  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:36.180062  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:36.180223  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:36.180363  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:36.180521  981274 main.go:141] libmachine: Using SSH client type: native
	I0314 19:08:36.180744  981274 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:08:36.180759  981274 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:08:36.298736  981274 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443316.285616994
	
	I0314 19:08:36.298765  981274 fix.go:216] guest clock: 1710443316.285616994
	I0314 19:08:36.298777  981274 fix.go:229] Guest: 2024-03-14 19:08:36.285616994 +0000 UTC Remote: 2024-03-14 19:08:36.1771027 +0000 UTC m=+52.071370957 (delta=108.514294ms)
	I0314 19:08:36.298850  981274 fix.go:200] guest clock delta is within tolerance: 108.514294ms
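The clock check runs date +%s.%N on the guest, parses the result, and compares it against the host's wall clock; here the ~108ms delta is within tolerance, so no resync is needed. A small sketch of that comparison (the 2-second tolerance is an assumption for illustration):

package main

import (
	"fmt"
	"math"
	"strconv"
	"strings"
	"time"
)

// clockDelta parses the guest's `date +%s.%N` output and returns how far it
// is from the given host time.
func clockDelta(guestDate string, host time.Time) (time.Duration, error) {
	secs, err := strconv.ParseFloat(strings.TrimSpace(guestDate), 64)
	if err != nil {
		return 0, err
	}
	sec := int64(secs)
	nsec := int64((secs - float64(sec)) * 1e9)
	guest := time.Unix(sec, nsec)
	return guest.Sub(host), nil
}

func main() {
	host := time.Unix(1710443316, 177102700) // the "Remote" timestamp from the log
	delta, err := clockDelta("1710443316.285616994\n", host)
	if err != nil {
		panic(err)
	}
	fmt.Printf("guest clock delta: %v\n", delta)
	if math.Abs(delta.Seconds()) > 2 {
		fmt.Println("delta exceeds tolerance; would resync the guest clock")
	} else {
		fmt.Println("guest clock delta is within tolerance")
	}
}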
	I0314 19:08:36.298862  981274 start.go:83] releasing machines lock for "kubernetes-upgrade-097195", held for 28.124788209s
	I0314 19:08:36.298894  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:36.299257  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:08:36.302255  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.302712  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.302741  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.302906  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:36.303407  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:36.303614  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:08:36.303702  981274 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:08:36.303749  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:36.303893  981274 ssh_runner.go:195] Run: cat /version.json
	I0314 19:08:36.303928  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:08:36.306643  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.306928  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.307077  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.307107  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.307347  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:36.307382  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:36.307389  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:36.307580  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:08:36.307589  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:36.307835  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:08:36.307841  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:36.308015  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:08:36.308020  981274 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:08:36.308236  981274 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:08:36.424620  981274 ssh_runner.go:195] Run: systemctl --version
	I0314 19:08:36.432179  981274 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:08:36.598179  981274 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:08:36.605547  981274 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:08:36.605615  981274 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:08:36.630087  981274 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:08:36.630115  981274 start.go:494] detecting cgroup driver to use...
	I0314 19:08:36.630183  981274 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:08:36.648017  981274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:08:36.668657  981274 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:08:36.668739  981274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:08:36.683343  981274 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:08:36.698354  981274 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:08:36.836527  981274 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:08:37.023133  981274 docker.go:233] disabling docker service ...
	I0314 19:08:37.023216  981274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:08:37.041294  981274 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:08:37.056362  981274 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:08:37.191803  981274 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:08:37.317158  981274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:08:37.334686  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:08:37.357011  981274 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:08:37.357084  981274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:08:37.369352  981274 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:08:37.369417  981274 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:08:37.382925  981274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:08:37.400520  981274 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:08:37.414142  981274 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
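
	Note: the sed/rm sequence above rewrites /etc/crio/crio.conf.d/02-crio.conf so that CRI-O uses registry.k8s.io/pause:3.2 as the pause image and cgroupfs as its cgroup manager (with conmon moved into the pod cgroup), then clears any stale minikube CNI config. Below is a minimal Go sketch of the two config rewrites; it is an illustration only, not minikube's own code, and assumes it runs as root on the guest.

    // Illustrative sketch (not minikube's code): apply the same two edits that the
    // sed commands above make to /etc/crio/crio.conf.d/02-crio.conf.
    package main

    import (
        "log"
        "os"
        "regexp"
    )

    func main() {
        const conf = "/etc/crio/crio.conf.d/02-crio.conf"
        data, err := os.ReadFile(conf)
        if err != nil {
            log.Fatal(err)
        }
        // Pin the pause image used for pod sandboxes.
        data = regexp.MustCompile(`(?m)^.*pause_image = .*$`).
            ReplaceAll(data, []byte(`pause_image = "registry.k8s.io/pause:3.2"`))
        // Switch CRI-O to the cgroupfs cgroup manager (matching the kubelet config).
        data = regexp.MustCompile(`(?m)^.*cgroup_manager = .*$`).
            ReplaceAll(data, []byte(`cgroup_manager = "cgroupfs"`))
        if err := os.WriteFile(conf, data, 0o644); err != nil {
            log.Fatal(err)
        }
    }
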
	I0314 19:08:37.428065  981274 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:08:37.438899  981274 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:08:37.438958  981274 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:08:37.453460  981274 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
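
	Note: the three commands above are a fallback pattern: the net.bridge.bridge-nf-call-iptables sysctl cannot be read (status 255) until the br_netfilter module is loaded, so the module is loaded on failure, and IPv4 forwarding is then enabled via /proc. A rough Go equivalent, assuming root on the guest (not minikube's implementation):

    // Rough Go equivalent (not minikube's code) of the netfilter fallback above.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        // sysctl exits non-zero while br_netfilter is not loaded, so load it on failure.
        if err := exec.Command("sysctl", "net.bridge.bridge-nf-call-iptables").Run(); err != nil {
            if err := exec.Command("modprobe", "br_netfilter").Run(); err != nil {
                log.Fatal("modprobe br_netfilter: ", err)
            }
        }
        // Equivalent of: echo 1 > /proc/sys/net/ipv4/ip_forward
        if err := os.WriteFile("/proc/sys/net/ipv4/ip_forward", []byte("1\n"), 0o644); err != nil {
            log.Fatal("enable ip_forward: ", err)
        }
    }
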
	I0314 19:08:37.464103  981274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:37.595237  981274 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:08:37.746106  981274 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:08:37.746193  981274 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:08:37.751720  981274 start.go:562] Will wait 60s for crictl version
	I0314 19:08:37.751773  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:37.756972  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:08:37.801059  981274 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:08:37.801169  981274 ssh_runner.go:195] Run: crio --version
	I0314 19:08:37.836379  981274 ssh_runner.go:195] Run: crio --version
	I0314 19:08:37.880899  981274 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:08:37.882291  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:08:37.885533  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:37.886027  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:08:24 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:08:37.886060  981274 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:08:37.886358  981274 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:08:37.891398  981274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:08:37.910486  981274 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:08:37.910612  981274 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:08:37.910668  981274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:08:37.950034  981274 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:08:37.950118  981274 ssh_runner.go:195] Run: which lz4
	I0314 19:08:37.955103  981274 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:08:37.960304  981274 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:08:37.960333  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:08:40.095522  981274 crio.go:444] duration metric: took 2.140468795s to copy over tarball
	I0314 19:08:40.095673  981274 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:08:43.375693  981274 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.279959359s)
	I0314 19:08:43.375736  981274 crio.go:451] duration metric: took 3.280175325s to extract the tarball
	I0314 19:08:43.375744  981274 ssh_runner.go:146] rm: /preloaded.tar.lz4
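
	Note: because no preloaded images were found in the runtime, the cached tarball preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 is copied to /preloaded.tar.lz4 on the guest, unpacked into /var with xattrs preserved, and then deleted. A small Go sketch of the unpack-and-clean-up step, as an illustration only (it assumes sudo and lz4 are available on the guest, not that this is minikube's code):

    // Illustration only: unpack the copied preload tarball into /var and remove it,
    // mirroring the two commands logged above.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        tar := exec.Command("sudo", "tar",
            "--xattrs", "--xattrs-include", "security.capability",
            "-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
        tar.Stdout, tar.Stderr = os.Stdout, os.Stderr
        if err := tar.Run(); err != nil {
            log.Fatal(err)
        }
        if err := exec.Command("sudo", "rm", "-f", "/preloaded.tar.lz4").Run(); err != nil {
            log.Fatal(err)
        }
    }
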
	I0314 19:08:43.424826  981274 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:08:43.478351  981274 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:08:43.478379  981274 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:08:43.478445  981274 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:08:43.478466  981274 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:08:43.478488  981274 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:08:43.478446  981274 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:08:43.478536  981274 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:08:43.478538  981274 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:08:43.478506  981274 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:08:43.478441  981274 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:08:43.479940  981274 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:08:43.479962  981274 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:08:43.479977  981274 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:08:43.480007  981274 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:08:43.480012  981274 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:08:43.480031  981274 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:08:43.480055  981274 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:08:43.480056  981274 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:08:43.662431  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:08:43.663762  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:08:43.667825  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:08:43.679079  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:08:43.698834  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:08:43.701248  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:08:43.706671  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:08:43.763578  981274 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:08:43.763643  981274 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:08:43.763693  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.779507  981274 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:08:43.830752  981274 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:08:43.830819  981274 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:08:43.830870  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.882897  981274 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:08:43.882952  981274 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:08:43.883028  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.915254  981274 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:08:43.915309  981274 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:08:43.915327  981274 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:08:43.915357  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.915366  981274 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:08:43.915412  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.915462  981274 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:08:43.915254  981274 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:08:43.915483  981274 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:08:43.915501  981274 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:08:43.915510  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.915518  981274 ssh_runner.go:195] Run: which crictl
	I0314 19:08:43.915572  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:08:44.046502  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:08:44.046558  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:08:44.046610  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:08:44.046667  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:08:44.046712  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:08:44.046713  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:08:44.046775  981274 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:08:44.193993  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:08:44.194024  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:08:44.194029  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:08:44.194104  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:08:44.194118  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:08:44.194176  981274 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:08:44.265971  981274 cache_images.go:92] duration metric: took 787.573617ms to LoadCachedImages
	W0314 19:08:44.266075  981274 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0: no such file or directory
	I0314 19:08:44.266095  981274 kubeadm.go:928] updating node { 192.168.50.124 8443 v1.20.0 crio true true} ...
	I0314 19:08:44.266241  981274 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=kubernetes-upgrade-097195 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.50.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:08:44.266326  981274 ssh_runner.go:195] Run: crio config
	I0314 19:08:44.321537  981274 cni.go:84] Creating CNI manager for ""
	I0314 19:08:44.321569  981274 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:08:44.321586  981274 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:08:44.321607  981274 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.124 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-097195 NodeName:kubernetes-upgrade-097195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:08:44.321783  981274 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "kubernetes-upgrade-097195"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
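	Note: the generated kubeadm config above uses the kubeadm.k8s.io/v1beta2 API, which matches the requested Kubernetes v1.20.0; it is written out as kubeadm.yaml.new, copied to /var/tmp/minikube/kubeadm.yaml, and then fed to kubeadm init with the version-pinned binaries (the invocation at 19:08:45.698259 below). The Go sketch that follows shows just that invocation step in isolation; it is not minikube's code, and the --ignore-preflight-errors list is abbreviated from the logged command.

    // Illustration only (not minikube's code): run kubeadm init against the
    // generated config once it has been copied to /var/tmp/minikube/kubeadm.yaml,
    // using the binaries staged under /var/lib/minikube/binaries/v1.20.0.
    package main

    import (
        "log"
        "os"
        "os/exec"
    )

    func main() {
        cmd := exec.Command("sudo", "env",
            "PATH=/var/lib/minikube/binaries/v1.20.0:"+os.Getenv("PATH"),
            "kubeadm", "init",
            "--config", "/var/tmp/minikube/kubeadm.yaml",
            // Abbreviated from the logged flag value.
            "--ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,Port-10250,Swap,NumCPU,Mem")
        cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
        if err := cmd.Run(); err != nil {
            log.Fatal(err)
        }
    }
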
	I0314 19:08:44.321855  981274 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:08:44.332977  981274 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:08:44.333057  981274 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:08:44.347286  981274 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (433 bytes)
	I0314 19:08:44.368927  981274 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:08:44.387516  981274 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2126 bytes)
	I0314 19:08:44.406601  981274 ssh_runner.go:195] Run: grep 192.168.50.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:08:44.411585  981274 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.124	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:08:44.425613  981274 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:08:44.563986  981274 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:08:44.588498  981274 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195 for IP: 192.168.50.124
	I0314 19:08:44.588521  981274 certs.go:194] generating shared ca certs ...
	I0314 19:08:44.588538  981274 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.588715  981274 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:08:44.588758  981274 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:08:44.588769  981274 certs.go:256] generating profile certs ...
	I0314 19:08:44.588820  981274 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.key
	I0314 19:08:44.588834  981274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.crt with IP's: []
	I0314 19:08:44.693147  981274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.crt ...
	I0314 19:08:44.693200  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.crt: {Name:mk8c651af1ba916a3d5dd95e96c5f8fb4638e908 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.693407  981274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.key ...
	I0314 19:08:44.693424  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.key: {Name:mk35bda12a5481f0b09266d8b1e0aaf9fba36336 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.693547  981274 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key.51bc1de3
	I0314 19:08:44.693567  981274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt.51bc1de3 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.50.124]
	I0314 19:08:44.862927  981274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt.51bc1de3 ...
	I0314 19:08:44.862964  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt.51bc1de3: {Name:mk4fcc449a315954af16f46c8028dbd8af407484 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.863154  981274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key.51bc1de3 ...
	I0314 19:08:44.863168  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key.51bc1de3: {Name:mk9592d287d75463fa55489741a4cf0b7d12a0ed Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.863250  981274 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt.51bc1de3 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt
	I0314 19:08:44.863322  981274 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key.51bc1de3 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key
	I0314 19:08:44.863374  981274 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key
	I0314 19:08:44.863393  981274 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.crt with IP's: []
	I0314 19:08:44.956684  981274 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.crt ...
	I0314 19:08:44.956713  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.crt: {Name:mk3cb1d790d991c8f24986c47bd648e1fdc8b6c6 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.956878  981274 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key ...
	I0314 19:08:44.956895  981274 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key: {Name:mk4429be3b6037f62e30eafe92f7090240959940 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:08:44.957113  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:08:44.957169  981274 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:08:44.957184  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:08:44.957222  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:08:44.957260  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:08:44.957286  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:08:44.957337  981274 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:08:44.957944  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:08:44.996092  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:08:45.032316  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:08:45.061674  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:08:45.117600  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 19:08:45.167688  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:08:45.195143  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:08:45.221893  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:08:45.251389  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:08:45.278482  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:08:45.307802  981274 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:08:45.340580  981274 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:08:45.362522  981274 ssh_runner.go:195] Run: openssl version
	I0314 19:08:45.371218  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:08:45.386762  981274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:08:45.392492  981274 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:08:45.392565  981274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:08:45.399108  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:08:45.411240  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:08:45.426345  981274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:45.436818  981274 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:45.436882  981274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:08:45.445401  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:08:45.464550  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:08:45.478252  981274 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:08:45.486029  981274 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:08:45.486099  981274 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:08:45.497703  981274 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
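
	Note: for each installed CA bundle above (9513112.pem, minikubeCA.pem, 951311.pem) the run computes the OpenSSL subject hash of the certificate and links /etc/ssl/certs/<hash>.0 to the PEM, which is how OpenSSL-based clients locate trusted CAs. A compact Go sketch of that wiring for a single certificate (illustrative only; it shells out to openssl and needs root to write under /etc/ssl/certs):

    // Illustrative sketch for one certificate (not minikube's code): compute the
    // OpenSSL subject hash and point /etc/ssl/certs/<hash>.0 at the PEM.
    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // one of the paths from the log
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out)) // e.g. "b5213941"
        link := filepath.Join("/etc/ssl/certs", hash+".0")
        // Recreate the link idempotently, mirroring the `test -L ... || ln -fs ...` guard.
        _ = os.Remove(link)
        if err := os.Symlink(pem, link); err != nil {
            log.Fatal(err)
        }
    }
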
	I0314 19:08:45.521256  981274 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:08:45.528198  981274 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:08:45.528279  981274 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:08:45.528379  981274 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:08:45.528474  981274 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:08:45.584746  981274 cri.go:89] found id: ""
	I0314 19:08:45.584823  981274 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:08:45.596668  981274 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:08:45.607732  981274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:08:45.618329  981274 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:08:45.618352  981274 kubeadm.go:156] found existing configuration files:
	
	I0314 19:08:45.618398  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:08:45.628132  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:08:45.628184  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:08:45.638460  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:08:45.649025  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:08:45.649084  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:08:45.658776  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:08:45.668785  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:08:45.668845  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:08:45.678737  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:08:45.688487  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:08:45.688561  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
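
	Note: the four grep/rm pairs above are a stale-config sweep: any pre-existing /etc/kubernetes/{admin,kubelet,controller-manager,scheduler}.conf that does not reference https://control-plane.minikube.internal:8443 is removed so the following kubeadm init can regenerate it (in this run all four were simply absent, since this is a first start). A condensed Go sketch of the same sweep (not minikube's code; removing the files needs root):

    // Condensed sketch (not minikube's code) of the stale-config sweep above.
    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const endpoint = "https://control-plane.minikube.internal:8443"
        for _, f := range []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        } {
            data, err := os.ReadFile(f)
            if err != nil {
                continue // file missing: nothing to clean up (the case in this run)
            }
            if !strings.Contains(string(data), endpoint) {
                if err := os.Remove(f); err != nil {
                    log.Fatal(err)
                }
                log.Printf("removed stale %s", f)
            }
        }
    }
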
	I0314 19:08:45.698259  981274 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:08:45.845562  981274 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:08:45.845725  981274 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:08:46.020877  981274 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:08:46.021052  981274 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:08:46.021205  981274 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:08:46.301951  981274 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:08:46.304268  981274 out.go:204]   - Generating certificates and keys ...
	I0314 19:08:46.304390  981274 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:08:46.304473  981274 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:08:46.709851  981274 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:08:46.898537  981274 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:08:47.050645  981274 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:08:47.547790  981274 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:08:47.710580  981274 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:08:47.717209  981274 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	I0314 19:08:48.017045  981274 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:08:48.017444  981274 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	I0314 19:08:48.129781  981274 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:08:48.471218  981274 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:08:49.085838  981274 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:08:49.086238  981274 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:08:49.191492  981274 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:08:49.443749  981274 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:08:49.517203  981274 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:08:49.735932  981274 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:08:49.759260  981274 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:08:49.759423  981274 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:08:49.759486  981274 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:08:49.907434  981274 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:08:49.909157  981274 out.go:204]   - Booting up control plane ...
	I0314 19:08:49.909278  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:08:49.925203  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:08:49.926602  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:08:49.927653  981274 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:08:49.932454  981274 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:09:29.931428  981274 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:09:29.932161  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:09:29.932468  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:09:34.933442  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:09:34.933763  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:09:44.934321  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:09:44.934523  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:10:04.935951  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:10:04.936252  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:10:44.936791  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:10:44.937340  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:10:44.937357  981274 kubeadm.go:309] 
	I0314 19:10:44.937449  981274 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:10:44.937540  981274 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:10:44.937551  981274 kubeadm.go:309] 
	I0314 19:10:44.937622  981274 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:10:44.937714  981274 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:10:44.937896  981274 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:10:44.937904  981274 kubeadm.go:309] 
	I0314 19:10:44.938119  981274 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:10:44.938195  981274 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:10:44.938275  981274 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:10:44.938290  981274 kubeadm.go:309] 
	I0314 19:10:44.938541  981274 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:10:44.938745  981274 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:10:44.938757  981274 kubeadm.go:309] 
	I0314 19:10:44.938961  981274 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:10:44.939139  981274 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:10:44.939294  981274 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:10:44.939466  981274 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:10:44.939492  981274 kubeadm.go:309] 
	I0314 19:10:44.939739  981274 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:10:44.939926  981274 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	W0314 19:10:44.940269  981274 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [kubernetes-upgrade-097195 localhost] and IPs [192.168.50.124 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:10:44.940332  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:10:44.940705  981274 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:10:46.998146  981274 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (2.057776754s)
	I0314 19:10:46.998254  981274 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:10:47.018427  981274 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:10:47.032148  981274 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:10:47.032174  981274 kubeadm.go:156] found existing configuration files:
	
	I0314 19:10:47.032245  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:10:47.043287  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:10:47.043353  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:10:47.055099  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:10:47.067261  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:10:47.067320  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:10:47.079775  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:10:47.092000  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:10:47.092062  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:10:47.106497  981274 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:10:47.117379  981274 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:10:47.117445  981274 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:10:47.128976  981274 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:10:47.220359  981274 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:10:47.220500  981274 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:10:47.424673  981274 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:10:47.424820  981274 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:10:47.424908  981274 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:10:47.692868  981274 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:10:47.694563  981274 out.go:204]   - Generating certificates and keys ...
	I0314 19:10:47.694693  981274 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:10:47.694768  981274 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:10:47.694886  981274 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:10:47.695254  981274 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:10:47.695613  981274 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:10:47.696066  981274 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:10:47.696362  981274 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:10:47.696710  981274 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:10:47.697236  981274 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:10:47.697731  981274 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:10:47.697908  981274 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:10:47.697980  981274 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:10:47.763215  981274 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:10:48.038154  981274 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:10:48.269202  981274 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:10:48.331198  981274 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:10:48.352418  981274 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:10:48.354394  981274 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:10:48.354472  981274 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:10:48.549419  981274 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:10:48.551837  981274 out.go:204]   - Booting up control plane ...
	I0314 19:10:48.551991  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:10:48.561980  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:10:48.562085  981274 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:10:48.562874  981274 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:10:48.565403  981274 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:11:28.567852  981274 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:11:28.568557  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:11:28.568818  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:11:33.569489  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:11:33.569730  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:11:43.570553  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:11:43.570856  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:12:03.572437  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:12:03.572859  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:12:43.572914  981274 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:12:43.573204  981274 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:12:43.573229  981274 kubeadm.go:309] 
	I0314 19:12:43.573276  981274 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:12:43.573339  981274 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:12:43.573350  981274 kubeadm.go:309] 
	I0314 19:12:43.573382  981274 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:12:43.573410  981274 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:12:43.573492  981274 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:12:43.573505  981274 kubeadm.go:309] 
	I0314 19:12:43.573605  981274 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:12:43.573635  981274 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:12:43.573673  981274 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:12:43.573698  981274 kubeadm.go:309] 
	I0314 19:12:43.573827  981274 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:12:43.573895  981274 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:12:43.573901  981274 kubeadm.go:309] 
	I0314 19:12:43.574013  981274 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:12:43.574117  981274 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:12:43.574213  981274 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:12:43.574304  981274 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:12:43.574315  981274 kubeadm.go:309] 
	I0314 19:12:43.576009  981274 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:12:43.576164  981274 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:12:43.576289  981274 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:12:43.576385  981274 kubeadm.go:393] duration metric: took 3m58.048112016s to StartCluster
	I0314 19:12:43.576464  981274 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:12:43.576537  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:12:43.652917  981274 cri.go:89] found id: ""
	I0314 19:12:43.652946  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.652955  981274 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:12:43.652960  981274 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:12:43.653021  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:12:43.708747  981274 cri.go:89] found id: ""
	I0314 19:12:43.708782  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.708793  981274 logs.go:278] No container was found matching "etcd"
	I0314 19:12:43.708801  981274 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:12:43.708872  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:12:43.770493  981274 cri.go:89] found id: ""
	I0314 19:12:43.770528  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.770540  981274 logs.go:278] No container was found matching "coredns"
	I0314 19:12:43.770548  981274 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:12:43.770607  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:12:43.818540  981274 cri.go:89] found id: ""
	I0314 19:12:43.818570  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.818582  981274 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:12:43.818590  981274 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:12:43.818655  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:12:43.868520  981274 cri.go:89] found id: ""
	I0314 19:12:43.868554  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.868566  981274 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:12:43.868575  981274 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:12:43.868642  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:12:43.919874  981274 cri.go:89] found id: ""
	I0314 19:12:43.919908  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.919920  981274 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:12:43.919929  981274 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:12:43.919998  981274 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:12:43.970617  981274 cri.go:89] found id: ""
	I0314 19:12:43.970651  981274 logs.go:276] 0 containers: []
	W0314 19:12:43.970663  981274 logs.go:278] No container was found matching "kindnet"
	I0314 19:12:43.970677  981274 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:12:43.970694  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:12:44.103596  981274 logs.go:123] Gathering logs for container status ...
	I0314 19:12:44.103645  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:12:44.162169  981274 logs.go:123] Gathering logs for kubelet ...
	I0314 19:12:44.163204  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:12:44.238959  981274 logs.go:123] Gathering logs for dmesg ...
	I0314 19:12:44.239005  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:12:44.266684  981274 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:12:44.266719  981274 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:12:44.446380  981274 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	W0314 19:12:44.446425  981274 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:12:44.446476  981274 out.go:239] * 
	* 
	W0314 19:12:44.446547  981274 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:12:44.446578  981274 out.go:239] * 
	* 
	W0314 19:12:44.447721  981274 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:12:44.451308  981274 out.go:177] 
	W0314 19:12:44.452683  981274 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:12:44.452727  981274 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	* Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:12:44.452755  981274 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	* Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:12:44.454205  981274 out.go:177] 

                                                
                                                
** /stderr **
version_upgrade_test.go:224: failed to start minikube HEAD with oldest k8s version: out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: exit status 109
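The failing kubeadm init above ends with minikube's own suggestion to pass --extra-config=kubelet.cgroup-driver=systemd. As a minimal sketch only (profile name and flags copied from the failing command and the suggestion in the log above; this retry is illustrative and was not part of the test run), a manual retry could look like:

	minikube delete -p kubernetes-upgrade-097195
	minikube start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 \
	  --driver=kvm2 --container-runtime=crio --extra-config=kubelet.cgroup-driver=systemd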
version_upgrade_test.go:227: (dbg) Run:  out/minikube-linux-amd64 stop -p kubernetes-upgrade-097195
version_upgrade_test.go:227: (dbg) Done: out/minikube-linux-amd64 stop -p kubernetes-upgrade-097195: (2.35104362s)
version_upgrade_test.go:232: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-097195 status --format={{.Host}}
version_upgrade_test.go:232: (dbg) Non-zero exit: out/minikube-linux-amd64 -p kubernetes-upgrade-097195 status --format={{.Host}}: exit status 7 (92.516091ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
version_upgrade_test.go:234: status error: exit status 7 (may be ok)
version_upgrade_test.go:243: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:243: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m25.961177484s)
version_upgrade_test.go:248: (dbg) Run:  kubectl --context kubernetes-upgrade-097195 version --output=json
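The version check above shells out to kubectl and reads its JSON output. The sketch below shows one way such output can be decoded in Go; it assumes the clientVersion/serverVersion gitVersion fields of 'kubectl version --output=json' and is an illustration rather than the test's own code.

// kubectl_version.go: hedged sketch of parsing 'kubectl version --output=json'.
package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

type versionInfo struct {
	GitVersion string `json:"gitVersion"`
}

type kubectlVersion struct {
	ClientVersion versionInfo `json:"clientVersion"`
	ServerVersion versionInfo `json:"serverVersion"`
}

func main() {
	// The context name matches the profile used in the log above.
	out, err := exec.Command("kubectl", "--context", "kubernetes-upgrade-097195",
		"version", "--output=json").Output()
	if err != nil {
		fmt.Println("kubectl version failed:", err)
		return
	}
	var v kubectlVersion
	if err := json.Unmarshal(out, &v); err != nil {
		fmt.Println("unexpected output:", err)
		return
	}
	fmt.Println("client:", v.ClientVersion.GitVersion, "server:", v.ServerVersion.GitVersion)
}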
version_upgrade_test.go:267: Attempting to downgrade Kubernetes (should fail)
version_upgrade_test.go:269: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:269: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.20.0 --driver=kvm2  --container-runtime=crio: exit status 106 (94.083232ms)

                                                
                                                
-- stdout --
	* [kubernetes-upgrade-097195] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to K8S_DOWNGRADE_UNSUPPORTED: Unable to safely downgrade existing Kubernetes v1.29.0-rc.2 cluster to v1.20.0
	* Suggestion: 
	
	    1) Recreate the cluster with Kubernetes 1.20.0, by running:
	    
	    minikube delete -p kubernetes-upgrade-097195
	    minikube start -p kubernetes-upgrade-097195 --kubernetes-version=v1.20.0
	    
	    2) Create a second cluster with Kubernetes 1.20.0, by running:
	    
	    minikube start -p kubernetes-upgrade-0971952 --kubernetes-version=v1.20.0
	    
	    3) Use the existing cluster at version Kubernetes 1.29.0-rc.2, by running:
	    
	    minikube start -p kubernetes-upgrade-097195 --kubernetes-version=v1.29.0-rc.2
	    

                                                
                                                
** /stderr **
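The downgrade attempt above is expected to fail: minikube refuses to move the existing v1.29.0-rc.2 cluster back to v1.20.0 and exits with status 106 (K8S_DOWNGRADE_UNSUPPORTED). The sketch below shows, under stated assumptions, how such a CLI step can be driven from Go and its exit code inspected; the binary path and flags are taken from the log, but the helper itself is illustrative and not the test's implementation.

// run_minikube.go: sketch of invoking the minikube CLI and reading its exit code.
package main

import (
	"fmt"
	"os/exec"
)

// runMinikube runs the minikube binary with the given arguments and returns the
// process exit code together with its combined output.
func runMinikube(args ...string) (int, string, error) {
	cmd := exec.Command("out/minikube-linux-amd64", args...)
	out, err := cmd.CombinedOutput()
	if exitErr, ok := err.(*exec.ExitError); ok {
		// A non-zero exit is expected for the refused downgrade.
		return exitErr.ExitCode(), string(out), nil
	}
	return 0, string(out), err
}

func main() {
	code, out, err := runMinikube("start", "-p", "kubernetes-upgrade-097195",
		"--memory=2200", "--kubernetes-version=v1.20.0",
		"--driver=kvm2", "--container-runtime=crio")
	if err != nil {
		fmt.Println("failed to run minikube:", err)
		return
	}
	fmt.Printf("exit code %d\n%s", code, out) // the downgrade attempt should report 106
}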
version_upgrade_test.go:273: Attempting restart after unsuccessful downgrade
version_upgrade_test.go:275: (dbg) Run:  out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:275: (dbg) Done: out/minikube-linux-amd64 start -p kubernetes-upgrade-097195 --memory=2200 --kubernetes-version=v1.29.0-rc.2 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m16.356320515s)
version_upgrade_test.go:279: *** TestKubernetesUpgrade FAILED at 2024-03-14 19:15:29.437707089 +0000 UTC m=+4255.923565986
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p kubernetes-upgrade-097195 -n kubernetes-upgrade-097195
helpers_test.go:244: <<< TestKubernetesUpgrade FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestKubernetesUpgrade]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p kubernetes-upgrade-097195 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p kubernetes-upgrade-097195 logs -n 25: (2.361026066s)
helpers_test.go:252: TestKubernetesUpgrade logs: 
-- stdout --
	
	==> Audit <==
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| Command |                 Args                  |          Profile          |  User   | Version |     Start Time      |      End Time       |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p cilium-058224 sudo crio            | cilium-058224             | jenkins | v1.32.0 | 14 Mar 24 19:11 UTC |                     |
	|         | config                                |                           |         |         |                     |                     |
	| delete  | -p cilium-058224                      | cilium-058224             | jenkins | v1.32.0 | 14 Mar 24 19:11 UTC | 14 Mar 24 19:11 UTC |
	| start   | -p force-systemd-env-748636           | force-systemd-env-748636  | jenkins | v1.32.0 | 14 Mar 24 19:11 UTC | 14 Mar 24 19:12 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=5 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:11 UTC | 14 Mar 24 19:12 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | force-systemd-flag-234927 ssh cat     | force-systemd-flag-234927 | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	|         | /etc/crio/crio.conf.d/02-crio.conf    |                           |         |         |                     |                     |
	| delete  | -p force-systemd-flag-234927          | force-systemd-flag-234927 | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	| start   | -p cert-expiration-525214             | cert-expiration-525214    | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:13 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --cert-expiration=3m                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	| start   | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:13 UTC |
	|         | --no-kubernetes --driver=kvm2         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| delete  | -p force-systemd-env-748636           | force-systemd-env-748636  | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	| start   | -p cert-options-840108                | cert-options-840108       | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:13 UTC |
	|         | --memory=2048                         |                           |         |         |                     |                     |
	|         | --apiserver-ips=127.0.0.1             |                           |         |         |                     |                     |
	|         | --apiserver-ips=192.168.15.15         |                           |         |         |                     |                     |
	|         | --apiserver-names=localhost           |                           |         |         |                     |                     |
	|         | --apiserver-names=www.google.com      |                           |         |         |                     |                     |
	|         | --apiserver-port=8555                 |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| stop    | -p kubernetes-upgrade-097195          | kubernetes-upgrade-097195 | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:12 UTC |
	| start   | -p kubernetes-upgrade-097195          | kubernetes-upgrade-097195 | jenkins | v1.32.0 | 14 Mar 24 19:12 UTC | 14 Mar 24 19:14 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-578974 sudo           | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| stop    | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	| start   | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:14 UTC |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | cert-options-840108 ssh               | cert-options-840108       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	|         | openssl x509 -text -noout -in         |                           |         |         |                     |                     |
	|         | /var/lib/minikube/certs/apiserver.crt |                           |         |         |                     |                     |
	| ssh     | -p cert-options-840108 -- sudo        | cert-options-840108       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	|         | cat /etc/kubernetes/admin.conf        |                           |         |         |                     |                     |
	| delete  | -p cert-options-840108                | cert-options-840108       | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC | 14 Mar 24 19:13 UTC |
	| start   | -p old-k8s-version-968094             | old-k8s-version-968094    | jenkins | v1.32.0 | 14 Mar 24 19:13 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --alsologtostderr --wait=true         |                           |         |         |                     |                     |
	|         | --kvm-network=default                 |                           |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system         |                           |         |         |                     |                     |
	|         | --disable-driver-mounts               |                           |         |         |                     |                     |
	|         | --keep-context=false                  |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-097195          | kubernetes-upgrade-097195 | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0          |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| start   | -p kubernetes-upgrade-097195          | kubernetes-upgrade-097195 | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:15 UTC |
	|         | --memory=2200                         |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|         | --alsologtostderr                     |                           |         |         |                     |                     |
	|         | -v=1 --driver=kvm2                    |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	| ssh     | -p NoKubernetes-578974 sudo           | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet           |                           |         |         |                     |                     |
	|         | service kubelet                       |                           |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                | NoKubernetes-578974       | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                  | no-preload-731976         | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | --memory=2200 --alsologtostderr       |                           |         |         |                     |                     |
	|         | --wait=true --preload=false           |                           |         |         |                     |                     |
	|         | --driver=kvm2                         |                           |         |         |                     |                     |
	|         | --container-runtime=crio              |                           |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2     |                           |         |         |                     |                     |
	|---------|---------------------------------------|---------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:14:17
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:14:17.920360  988919 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:14:17.920495  988919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:14:17.920506  988919 out.go:304] Setting ErrFile to fd 2...
	I0314 19:14:17.920513  988919 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:14:17.920733  988919 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:14:17.921379  988919 out.go:298] Setting JSON to false
	I0314 19:14:17.922418  988919 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97010,"bootTime":1710346648,"procs":220,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:14:17.922487  988919 start.go:139] virtualization: kvm guest
	I0314 19:14:17.924664  988919 out.go:177] * [no-preload-731976] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:14:17.926554  988919 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:14:17.928092  988919 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:14:17.926566  988919 notify.go:220] Checking for updates...
	I0314 19:14:17.929547  988919 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:14:17.930830  988919 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:17.932078  988919 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:14:17.933343  988919 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:14:17.935186  988919 config.go:182] Loaded profile config "cert-expiration-525214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:14:17.935296  988919 config.go:182] Loaded profile config "kubernetes-upgrade-097195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:14:17.935402  988919 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:14:17.935526  988919 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:14:17.975212  988919 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:14:17.976499  988919 start.go:297] selected driver: kvm2
	I0314 19:14:17.976518  988919 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:14:17.976530  988919 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:14:17.977457  988919 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.977546  988919 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:14:17.993570  988919 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:14:17.993610  988919 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:14:17.993835  988919 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:14:17.993864  988919 cni.go:84] Creating CNI manager for ""
	I0314 19:14:17.993871  988919 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:14:17.993878  988919 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:14:17.993923  988919 start.go:340] cluster config:
	{Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:14:17.994008  988919 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.995688  988919 out.go:177] * Starting "no-preload-731976" primary control-plane node in "no-preload-731976" cluster
	I0314 19:14:13.234467  988684 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:14:13.234498  988684 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 19:14:13.234506  988684 cache.go:56] Caching tarball of preloaded images
	I0314 19:14:13.234596  988684 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:14:13.234611  988684 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 19:14:13.234737  988684 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/config.json ...
	I0314 19:14:13.234950  988684 start.go:360] acquireMachinesLock for kubernetes-upgrade-097195: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:14:14.979586  988436 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:14:14.979774  988436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:14:14.979842  988436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:14:14.996969  988436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0314 19:14:14.997375  988436 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:14:14.997882  988436 main.go:141] libmachine: Using API Version  1
	I0314 19:14:14.997906  988436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:14:14.998341  988436 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:14:14.998669  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:14.998872  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:14.999049  988436 start.go:159] libmachine.API.Create for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:14:14.999078  988436 client.go:168] LocalClient.Create starting
	I0314 19:14:14.999125  988436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:14:14.999170  988436 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:14.999194  988436 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:14.999299  988436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:14:14.999324  988436 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:14.999335  988436 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:14.999353  988436 main.go:141] libmachine: Running pre-create checks...
	I0314 19:14:14.999578  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .PreCreateCheck
	I0314 19:14:15.001322  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:15.001814  988436 main.go:141] libmachine: Creating machine...
	I0314 19:14:15.001831  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .Create
	I0314 19:14:15.002002  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating KVM machine...
	I0314 19:14:15.003167  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found existing default KVM network
	I0314 19:14:15.004844  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.004693  988720 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:d2:f4} reservation:<nil>}
	I0314 19:14:15.005768  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.005684  988720 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:d3:3a} reservation:<nil>}
	I0314 19:14:15.006646  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.006567  988720 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:75:87} reservation:<nil>}
	I0314 19:14:15.007824  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.007730  988720 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000352480}
	I0314 19:14:15.007850  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | created network xml: 
	I0314 19:14:15.007864  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | <network>
	I0314 19:14:15.007873  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <name>mk-old-k8s-version-968094</name>
	I0314 19:14:15.007887  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <dns enable='no'/>
	I0314 19:14:15.007897  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   
	I0314 19:14:15.007913  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 19:14:15.007926  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |     <dhcp>
	I0314 19:14:15.007936  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 19:14:15.007946  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |     </dhcp>
	I0314 19:14:15.007954  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   </ip>
	I0314 19:14:15.007965  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   
	I0314 19:14:15.007974  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | </network>
	I0314 19:14:15.007980  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | 
	I0314 19:14:15.013775  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | trying to create private KVM network mk-old-k8s-version-968094 192.168.72.0/24...
	I0314 19:14:15.087225  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | private KVM network mk-old-k8s-version-968094 192.168.72.0/24 created
	I0314 19:14:15.087259  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.087173  988720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:15.087278  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 ...
	I0314 19:14:15.087297  988436 main.go:141] libmachine: (old-k8s-version-968094) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:14:15.087352  988436 main.go:141] libmachine: (old-k8s-version-968094) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:14:15.334181  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.334042  988720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa...
	I0314 19:14:15.502173  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.502014  988720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/old-k8s-version-968094.rawdisk...
	I0314 19:14:15.502210  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Writing magic tar header
	I0314 19:14:15.502233  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Writing SSH key tar header
	I0314 19:14:15.502248  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.502195  988720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 ...
	I0314 19:14:15.502340  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094
	I0314 19:14:15.502378  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 (perms=drwx------)
	I0314 19:14:15.502394  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:14:15.502415  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:15.502428  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:14:15.502444  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:14:15.502456  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:14:15.502473  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:14:15.502487  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:14:15.502501  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:14:15.502516  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:14:15.502527  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:14:15.502542  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:14:15.502553  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home
	I0314 19:14:15.502573  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Skipping /home - not owner
	I0314 19:14:15.503897  988436 main.go:141] libmachine: (old-k8s-version-968094) define libvirt domain using xml: 
	I0314 19:14:15.503919  988436 main.go:141] libmachine: (old-k8s-version-968094) <domain type='kvm'>
	I0314 19:14:15.503926  988436 main.go:141] libmachine: (old-k8s-version-968094)   <name>old-k8s-version-968094</name>
	I0314 19:14:15.503932  988436 main.go:141] libmachine: (old-k8s-version-968094)   <memory unit='MiB'>2200</memory>
	I0314 19:14:15.503938  988436 main.go:141] libmachine: (old-k8s-version-968094)   <vcpu>2</vcpu>
	I0314 19:14:15.503942  988436 main.go:141] libmachine: (old-k8s-version-968094)   <features>
	I0314 19:14:15.503950  988436 main.go:141] libmachine: (old-k8s-version-968094)     <acpi/>
	I0314 19:14:15.503955  988436 main.go:141] libmachine: (old-k8s-version-968094)     <apic/>
	I0314 19:14:15.503960  988436 main.go:141] libmachine: (old-k8s-version-968094)     <pae/>
	I0314 19:14:15.503965  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.503970  988436 main.go:141] libmachine: (old-k8s-version-968094)   </features>
	I0314 19:14:15.503977  988436 main.go:141] libmachine: (old-k8s-version-968094)   <cpu mode='host-passthrough'>
	I0314 19:14:15.503983  988436 main.go:141] libmachine: (old-k8s-version-968094)   
	I0314 19:14:15.503988  988436 main.go:141] libmachine: (old-k8s-version-968094)   </cpu>
	I0314 19:14:15.503994  988436 main.go:141] libmachine: (old-k8s-version-968094)   <os>
	I0314 19:14:15.504002  988436 main.go:141] libmachine: (old-k8s-version-968094)     <type>hvm</type>
	I0314 19:14:15.504008  988436 main.go:141] libmachine: (old-k8s-version-968094)     <boot dev='cdrom'/>
	I0314 19:14:15.504019  988436 main.go:141] libmachine: (old-k8s-version-968094)     <boot dev='hd'/>
	I0314 19:14:15.504027  988436 main.go:141] libmachine: (old-k8s-version-968094)     <bootmenu enable='no'/>
	I0314 19:14:15.504032  988436 main.go:141] libmachine: (old-k8s-version-968094)   </os>
	I0314 19:14:15.504038  988436 main.go:141] libmachine: (old-k8s-version-968094)   <devices>
	I0314 19:14:15.504043  988436 main.go:141] libmachine: (old-k8s-version-968094)     <disk type='file' device='cdrom'>
	I0314 19:14:15.504053  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/boot2docker.iso'/>
	I0314 19:14:15.504064  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target dev='hdc' bus='scsi'/>
	I0314 19:14:15.504076  988436 main.go:141] libmachine: (old-k8s-version-968094)       <readonly/>
	I0314 19:14:15.504087  988436 main.go:141] libmachine: (old-k8s-version-968094)     </disk>
	I0314 19:14:15.504150  988436 main.go:141] libmachine: (old-k8s-version-968094)     <disk type='file' device='disk'>
	I0314 19:14:15.504178  988436 main.go:141] libmachine: (old-k8s-version-968094)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:14:15.504193  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/old-k8s-version-968094.rawdisk'/>
	I0314 19:14:15.504202  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target dev='hda' bus='virtio'/>
	I0314 19:14:15.504234  988436 main.go:141] libmachine: (old-k8s-version-968094)     </disk>
	I0314 19:14:15.504250  988436 main.go:141] libmachine: (old-k8s-version-968094)     <interface type='network'>
	I0314 19:14:15.504265  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source network='mk-old-k8s-version-968094'/>
	I0314 19:14:15.504275  988436 main.go:141] libmachine: (old-k8s-version-968094)       <model type='virtio'/>
	I0314 19:14:15.504281  988436 main.go:141] libmachine: (old-k8s-version-968094)     </interface>
	I0314 19:14:15.504289  988436 main.go:141] libmachine: (old-k8s-version-968094)     <interface type='network'>
	I0314 19:14:15.504296  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source network='default'/>
	I0314 19:14:15.504304  988436 main.go:141] libmachine: (old-k8s-version-968094)       <model type='virtio'/>
	I0314 19:14:15.504314  988436 main.go:141] libmachine: (old-k8s-version-968094)     </interface>
	I0314 19:14:15.504326  988436 main.go:141] libmachine: (old-k8s-version-968094)     <serial type='pty'>
	I0314 19:14:15.504337  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target port='0'/>
	I0314 19:14:15.504350  988436 main.go:141] libmachine: (old-k8s-version-968094)     </serial>
	I0314 19:14:15.504360  988436 main.go:141] libmachine: (old-k8s-version-968094)     <console type='pty'>
	I0314 19:14:15.504366  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target type='serial' port='0'/>
	I0314 19:14:15.504373  988436 main.go:141] libmachine: (old-k8s-version-968094)     </console>
	I0314 19:14:15.504379  988436 main.go:141] libmachine: (old-k8s-version-968094)     <rng model='virtio'>
	I0314 19:14:15.504387  988436 main.go:141] libmachine: (old-k8s-version-968094)       <backend model='random'>/dev/random</backend>
	I0314 19:14:15.504394  988436 main.go:141] libmachine: (old-k8s-version-968094)     </rng>
	I0314 19:14:15.504405  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.504419  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.504434  988436 main.go:141] libmachine: (old-k8s-version-968094)   </devices>
	I0314 19:14:15.504472  988436 main.go:141] libmachine: (old-k8s-version-968094) </domain>
	I0314 19:14:15.504497  988436 main.go:141] libmachine: (old-k8s-version-968094) 
	I0314 19:14:15.509914  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:11:fb:7b in network default
	I0314 19:14:15.510536  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:14:15.510563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:15.511299  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:14:15.511673  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:14:15.512297  988436 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:14:15.513015  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:14:16.838508  988436 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:14:16.839093  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:16.839434  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:16.839465  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:16.839422  988720 retry.go:31] will retry after 280.866203ms: waiting for machine to come up
	I0314 19:14:17.122181  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:17.123635  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:17.123664  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:17.123570  988720 retry.go:31] will retry after 276.458753ms: waiting for machine to come up
	I0314 19:14:17.785603  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:17.786111  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:17.786168  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:17.786093  988720 retry.go:31] will retry after 389.166315ms: waiting for machine to come up
	I0314 19:14:18.176561  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:18.177070  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:18.177106  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:18.177023  988720 retry.go:31] will retry after 380.752529ms: waiting for machine to come up
	I0314 19:14:18.559815  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:18.560949  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:18.560983  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:18.560890  988720 retry.go:31] will retry after 727.786586ms: waiting for machine to come up
	I0314 19:14:17.996976  988919 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:14:17.997087  988919 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:14:17.997111  988919 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json: {Name:mk229e72467b53da75b07e4ad0398f717b0fadd1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:17.997264  988919 cache.go:107] acquiring lock: {Name:mkb8329d2f4f18529459f0b9820b309b9f56457c Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997281  988919 cache.go:107] acquiring lock: {Name:mke803f28b337120a839ee4fed1d39b37a9d1b7d Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997290  988919 cache.go:107] acquiring lock: {Name:mk29e9d56915e825fb898258b1edd37c60e75d5b Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997323  988919 cache.go:107] acquiring lock: {Name:mk29cc243f13caace2592255d07589cce8a9c50f Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997364  988919 cache.go:115] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 exists
	I0314 19:14:17.997352  988919 cache.go:107] acquiring lock: {Name:mke58c2afb2bdb6303249673713df438af9d7247 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997332  988919 cache.go:107] acquiring lock: {Name:mkd20906efea9b042695c39fc3a669117d61d2be Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997450  988919 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:14:17.997461  988919 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:14:17.997507  988919 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:14:17.997517  988919 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:14:17.997537  988919 cache.go:107] acquiring lock: {Name:mk1bb6af34ca5cc4f876ec4e2c5d37a81cc16939 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997604  988919 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:14:17.997378  988919 cache.go:96] cache image "gcr.io/k8s-minikube/storage-provisioner:v5" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5" took 122.435µs
	I0314 19:14:17.997611  988919 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:14:17.997744  988919 cache.go:80] save to tar file gcr.io/k8s-minikube/storage-provisioner:v5 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 succeeded
	I0314 19:14:17.997290  988919 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:14:17.997759  988919 cache.go:107] acquiring lock: {Name:mk8a7daa6430d81890efc2b6f30311c11301df49 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:14:17.997880  988919 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:14:17.999320  988919 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:14:17.999536  988919 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:14:17.999585  988919 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:14:17.999882  988919 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:14:18.000066  988919 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:14:18.000534  988919 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:14:18.000589  988919 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:14:18.151364  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:14:18.153957  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:14:18.155615  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:14:18.158717  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9
	I0314 19:14:18.169137  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:14:18.171429  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:14:18.233705  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 exists
	I0314 19:14:18.233729  988919 cache.go:96] cache image "registry.k8s.io/pause:3.9" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9" took 236.432882ms
	I0314 19:14:18.233741  988919 cache.go:80] save to tar file registry.k8s.io/pause:3.9 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.9 succeeded
	I0314 19:14:18.281199  988919 cache.go:162] opening:  /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:14:18.768833  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 exists
	I0314 19:14:18.768869  988919 cache.go:96] cache image "registry.k8s.io/kube-proxy:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2" took 771.112384ms
	I0314 19:14:18.768887  988919 cache.go:80] save to tar file registry.k8s.io/kube-proxy:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 succeeded
	I0314 19:14:19.438405  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 exists
	I0314 19:14:19.438434  988919 cache.go:96] cache image "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2" took 1.441115505s
	I0314 19:14:19.438450  988919 cache.go:80] save to tar file registry.k8s.io/kube-scheduler:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 succeeded
	I0314 19:14:19.592198  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 exists
	I0314 19:14:19.592247  988919 cache.go:96] cache image "registry.k8s.io/coredns/coredns:v1.11.1" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1" took 1.594710419s
	I0314 19:14:19.592259  988919 cache.go:80] save to tar file registry.k8s.io/coredns/coredns:v1.11.1 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 succeeded
	I0314 19:14:19.730747  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 exists
	I0314 19:14:19.730780  988919 cache.go:96] cache image "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2" took 1.733513694s
	I0314 19:14:19.730802  988919 cache.go:80] save to tar file registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 succeeded
	I0314 19:14:19.845750  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 exists
	I0314 19:14:19.845780  988919 cache.go:96] cache image "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2" took 1.848521318s
	I0314 19:14:19.845795  988919 cache.go:80] save to tar file registry.k8s.io/kube-apiserver:v1.29.0-rc.2 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 succeeded
	I0314 19:14:20.087020  988919 cache.go:157] /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 exists
	I0314 19:14:20.087052  988919 cache.go:96] cache image "registry.k8s.io/etcd:3.5.10-0" -> "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0" took 2.08969975s
	I0314 19:14:20.087066  988919 cache.go:80] save to tar file registry.k8s.io/etcd:3.5.10-0 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 succeeded
	I0314 19:14:20.087088  988919 cache.go:87] Successfully saved all images to host disk.
	I0314 19:14:19.290657  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:19.291148  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:19.291182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:19.291092  988720 retry.go:31] will retry after 821.899642ms: waiting for machine to come up
	I0314 19:14:20.114550  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:20.115072  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:20.115105  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:20.115013  988720 retry.go:31] will retry after 966.170572ms: waiting for machine to come up
	I0314 19:14:21.083182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:21.083774  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:21.083801  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:21.083727  988720 retry.go:31] will retry after 1.076047079s: waiting for machine to come up
	I0314 19:14:22.161652  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:22.162151  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:22.162182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:22.162089  988720 retry.go:31] will retry after 1.501351238s: waiting for machine to come up
	I0314 19:14:23.665996  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:23.666527  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:23.666563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:23.666483  988720 retry.go:31] will retry after 1.978163759s: waiting for machine to come up
	I0314 19:14:25.646944  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:25.647516  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:25.647545  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:25.647467  988720 retry.go:31] will retry after 2.901646032s: waiting for machine to come up
	I0314 19:14:28.552868  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:28.553474  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:28.553508  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:28.553406  988720 retry.go:31] will retry after 3.528157438s: waiting for machine to come up
	I0314 19:14:32.084474  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:32.084898  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:32.084931  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:32.084846  988720 retry.go:31] will retry after 4.015658963s: waiting for machine to come up
	I0314 19:14:36.102233  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:36.102603  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:36.102632  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:36.102545  988720 retry.go:31] will retry after 5.130164013s: waiting for machine to come up
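	(Aside, not part of the log: the repeated "will retry after …: waiting for machine to come up" lines above are a poll-with-growing-delay loop over the domain's DHCP lease. A minimal, hypothetical Go sketch of that loop; waitForIP and lookupIP are illustrative, not libmachine's real API, and the real intervals also include jitter.)

	package main

	import (
		"errors"
		"log"
		"time"
	)

	// waitForIP polls lookupIP until it reports an address or the timeout expires,
	// sleeping a little longer after each miss.
	func waitForIP(lookupIP func() (string, bool), timeout time.Duration) (string, error) {
		deadline := time.Now().Add(timeout)
		delay := 800 * time.Millisecond
		for time.Now().Before(deadline) {
			if ip, ok := lookupIP(); ok {
				return ip, nil
			}
			log.Printf("will retry after %s: waiting for machine to come up", delay)
			time.Sleep(delay)
			delay = time.Duration(float64(delay) * 1.4) // grow the backoff
		}
		return "", errors.New("timed out waiting for machine to come up")
	}

	func main() {
		attempts := 0
		ip, err := waitForIP(func() (string, bool) {
			attempts++
			return "192.168.72.211", attempts > 2 // pretend the lease appears on the 3rd poll
		}, 30*time.Second)
		log.Println(ip, err)
	}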
	I0314 19:14:42.897462  988684 start.go:364] duration metric: took 29.662484501s to acquireMachinesLock for "kubernetes-upgrade-097195"
	I0314 19:14:42.897519  988684 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:14:42.897527  988684 fix.go:54] fixHost starting: 
	I0314 19:14:42.898014  988684 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:14:42.898067  988684 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:14:42.915470  988684 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45917
	I0314 19:14:42.915906  988684 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:14:42.916498  988684 main.go:141] libmachine: Using API Version  1
	I0314 19:14:42.916527  988684 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:14:42.916940  988684 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:14:42.917175  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:42.917344  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetState
	I0314 19:14:42.918974  988684 fix.go:112] recreateIfNeeded on kubernetes-upgrade-097195: state=Running err=<nil>
	W0314 19:14:42.918997  988684 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:14:42.921077  988684 out.go:177] * Updating the running kvm2 "kubernetes-upgrade-097195" VM ...
	I0314 19:14:42.922383  988684 machine.go:94] provisionDockerMachine start ...
	I0314 19:14:42.922410  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:42.922614  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:42.925628  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:42.926142  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:42.926178  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:42.926328  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:42.926484  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:42.926987  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:42.929003  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:42.929206  988684 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:42.929477  988684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:14:42.929503  988684 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:14:43.055009  988684 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-097195
	
	I0314 19:14:43.055042  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:14:43.055329  988684 buildroot.go:166] provisioning hostname "kubernetes-upgrade-097195"
	I0314 19:14:43.055359  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:14:43.055584  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:43.058631  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.059101  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.059129  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.059261  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:43.059472  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.059639  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.059801  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:43.059965  988684 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:43.060164  988684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:14:43.060182  988684 main.go:141] libmachine: About to run SSH command:
	sudo hostname kubernetes-upgrade-097195 && echo "kubernetes-upgrade-097195" | sudo tee /etc/hostname
	I0314 19:14:41.233833  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.234521  988436 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:14:41.234548  988436 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:14:41.234559  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.234918  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094
	I0314 19:14:41.311619  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:14:41.311651  988436 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:14:41.311664  988436 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:14:41.314272  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.314865  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.314905  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.315027  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:14:41.315064  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:14:41.315099  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:14:41.315121  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:14:41.315135  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:14:41.440131  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:14:41.440410  988436 main.go:141] libmachine: (old-k8s-version-968094) KVM machine creation complete!
	I0314 19:14:41.440739  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:41.441338  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:41.441540  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:41.441683  988436 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 19:14:41.441703  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:14:41.443149  988436 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 19:14:41.443165  988436 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 19:14:41.443173  988436 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 19:14:41.443179  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.445819  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.446175  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.446211  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.446343  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.446536  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.446702  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.446836  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.447031  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.447292  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.447309  988436 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 19:14:41.555625  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:14:41.555651  988436 main.go:141] libmachine: Detecting the provisioner...
	I0314 19:14:41.555661  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.558680  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.559201  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.559234  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.559380  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.559603  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.559772  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.559895  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.560034  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.560224  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.560241  988436 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 19:14:41.669043  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 19:14:41.669119  988436 main.go:141] libmachine: found compatible host: buildroot
	I0314 19:14:41.669132  988436 main.go:141] libmachine: Provisioning with buildroot...
	I0314 19:14:41.669141  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.669386  988436 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:14:41.669423  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.669617  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.672459  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.672874  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.672902  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.673013  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.673213  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.673399  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.673528  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.673671  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.673846  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.673859  988436 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:14:41.802301  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:14:41.802339  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.806468  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.806904  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.806938  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.807147  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.807366  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.807517  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.807661  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.807812  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.808045  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.808067  988436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:14:41.926611  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:14:41.926646  988436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:14:41.926706  988436 buildroot.go:174] setting up certificates
	I0314 19:14:41.926720  988436 provision.go:84] configureAuth start
	I0314 19:14:41.926737  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.927071  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:41.929859  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.930239  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.930276  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.930391  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.932830  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.933165  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.933194  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.933295  988436 provision.go:143] copyHostCerts
	I0314 19:14:41.933367  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:14:41.933378  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:14:41.933440  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:14:41.933566  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:14:41.933579  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:14:41.933612  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:14:41.933689  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:14:41.933699  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:14:41.933726  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:14:41.933797  988436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:14:42.182763  988436 provision.go:177] copyRemoteCerts
	I0314 19:14:42.182826  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:14:42.182857  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.185712  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.186035  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.186066  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.186285  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.186534  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.186705  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.186852  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.270889  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:14:42.298651  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:14:42.325621  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:14:42.352639  988436 provision.go:87] duration metric: took 425.90336ms to configureAuth
	I0314 19:14:42.352669  988436 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:14:42.352875  988436 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:14:42.352965  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.355723  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.356124  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.356146  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.356341  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.356546  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.356722  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.356840  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.357004  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:42.357172  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:42.357189  988436 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:14:42.642868  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:14:42.642901  988436 main.go:141] libmachine: Checking connection to Docker...
	I0314 19:14:42.642912  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetURL
	I0314 19:14:42.644372  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using libvirt version 6000000
	I0314 19:14:42.647311  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.647702  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.647731  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.647954  988436 main.go:141] libmachine: Docker is up and running!
	I0314 19:14:42.647972  988436 main.go:141] libmachine: Reticulating splines...
	I0314 19:14:42.647981  988436 client.go:171] duration metric: took 27.648891803s to LocalClient.Create
	I0314 19:14:42.648014  988436 start.go:167] duration metric: took 27.648964706s to libmachine.API.Create "old-k8s-version-968094"
	I0314 19:14:42.648041  988436 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:14:42.648055  988436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:14:42.648075  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.648374  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:14:42.648405  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.650521  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.650849  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.650879  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.650950  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.651149  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.651350  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.651499  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.735116  988436 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:14:42.740327  988436 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:14:42.740353  988436 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:14:42.740417  988436 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:14:42.740498  988436 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:14:42.740607  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:14:42.750455  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:14:42.777266  988436 start.go:296] duration metric: took 129.209637ms for postStartSetup
	I0314 19:14:42.777327  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:42.778098  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:42.781186  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.781563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.781594  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.781820  988436 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:14:42.782042  988436 start.go:128] duration metric: took 27.804406667s to createHost
	I0314 19:14:42.782072  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.784251  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.784637  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.784668  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.784784  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.785002  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.785154  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.785274  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.785446  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:42.785606  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:42.785617  988436 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:14:42.897283  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443682.880012035
	
	I0314 19:14:42.897308  988436 fix.go:216] guest clock: 1710443682.880012035
	I0314 19:14:42.897317  988436 fix.go:229] Guest: 2024-03-14 19:14:42.880012035 +0000 UTC Remote: 2024-03-14 19:14:42.78205842 +0000 UTC m=+48.630554959 (delta=97.953615ms)
	I0314 19:14:42.897367  988436 fix.go:200] guest clock delta is within tolerance: 97.953615ms
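	(Aside, not part of the log: the guest-clock lines above record parsing the guest's `date +%s.%N` output and comparing it with the host time against a tolerance. A minimal, hypothetical Go sketch of that comparison; the 2s tolerance is an assumption for illustration.)

	package main

	import (
		"fmt"
		"strconv"
		"time"
	)

	// clockDelta parses the guest's `date +%s.%N` output and returns how far the
	// guest clock is from the host clock, plus whether that is within tolerance.
	func clockDelta(guestOutput string, tolerance time.Duration) (time.Duration, bool, error) {
		secs, err := strconv.ParseFloat(guestOutput, 64) // e.g. "1710443682.880012035"
		if err != nil {
			return 0, false, err
		}
		guest := time.Unix(0, int64(secs*float64(time.Second)))
		delta := time.Since(guest)
		if delta < 0 {
			delta = -delta
		}
		return delta, delta <= tolerance, nil
	}

	func main() {
		d, ok, err := clockDelta("1710443682.880012035", 2*time.Second)
		fmt.Println(d, ok, err)
	}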
	I0314 19:14:42.897374  988436 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 27.91994048s
	I0314 19:14:42.897416  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.897723  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:42.900723  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.901123  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.901151  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.901341  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.901889  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.902093  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.902180  988436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:14:42.902248  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.902358  988436 ssh_runner.go:195] Run: cat /version.json
	I0314 19:14:42.902387  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.905112  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905330  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905535  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.905578  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905671  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.905808  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.905848  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905853  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.906019  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.906067  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.906174  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.906247  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.906407  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.906529  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.990403  988436 ssh_runner.go:195] Run: systemctl --version
	I0314 19:14:43.019894  988436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:14:43.181275  988436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:14:43.188411  988436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:14:43.188469  988436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:14:43.207426  988436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:14:43.207460  988436 start.go:494] detecting cgroup driver to use...
	I0314 19:14:43.207540  988436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:14:43.227408  988436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:14:43.244322  988436 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:14:43.244390  988436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:14:43.259914  988436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:14:43.274439  988436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:14:43.400707  988436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:14:43.553766  988436 docker.go:233] disabling docker service ...
	I0314 19:14:43.553836  988436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:14:43.573799  988436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:14:43.592832  988436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:14:43.747673  988436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:14:43.893550  988436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:14:43.911407  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:14:43.932872  988436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:14:43.932936  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.945499  988436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:14:43.945550  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.958055  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.970771  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.983243  988436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:14:43.995576  988436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:14:44.006455  988436 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:14:44.006510  988436 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:14:44.021079  988436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:14:44.033636  988436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:14:44.197240  988436 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:14:44.339817  988436 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:14:44.339902  988436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:14:44.345568  988436 start.go:562] Will wait 60s for crictl version
	I0314 19:14:44.345619  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:44.350149  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:14:44.396503  988436 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
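	(Aside, not part of the log: the two "Will wait 60s" lines above are plain poll-until-deadline checks, first for the CRI socket path and then for a working crictl. A minimal, hypothetical Go sketch; waitFor and runOnHost are illustrative stand-ins for the ssh_runner calls.)

	package main

	import (
		"fmt"
		"time"
	)

	// waitFor polls check once per second until it succeeds or the timeout expires.
	func waitFor(desc string, timeout time.Duration, check func() bool) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if check() {
				return nil
			}
			time.Sleep(time.Second)
		}
		return fmt.Errorf("timed out after %s waiting for %s", timeout, desc)
	}

	func main() {
		// Placeholder for running a command on the guest over SSH; always succeeds here.
		runOnHost := func(cmd string) bool { return true }
		_ = waitFor("socket /var/run/crio/crio.sock", 60*time.Second, func() bool {
			return runOnHost("stat /var/run/crio/crio.sock")
		})
		_ = waitFor("crictl version", 60*time.Second, func() bool {
			return runOnHost("sudo /usr/bin/crictl version")
		})
	}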
	I0314 19:14:44.396591  988436 ssh_runner.go:195] Run: crio --version
	I0314 19:14:44.430497  988436 ssh_runner.go:195] Run: crio --version
	I0314 19:14:44.461214  988436 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:14:43.203123  988684 main.go:141] libmachine: SSH cmd err, output: <nil>: kubernetes-upgrade-097195
	
	I0314 19:14:43.203161  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:43.206153  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.206610  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.206647  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.206845  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:43.207068  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.207276  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.207481  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:43.207714  988684 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:43.207961  988684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:14:43.207991  988684 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\skubernetes-upgrade-097195' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 kubernetes-upgrade-097195/g' /etc/hosts;
				else 
					echo '127.0.1.1 kubernetes-upgrade-097195' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:14:43.326426  988684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:14:43.326463  988684 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:14:43.326490  988684 buildroot.go:174] setting up certificates
	I0314 19:14:43.326504  988684 provision.go:84] configureAuth start
	I0314 19:14:43.326523  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetMachineName
	I0314 19:14:43.326853  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:14:43.329943  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.330350  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.330393  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.330501  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:43.332962  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.333401  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.333434  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.333594  988684 provision.go:143] copyHostCerts
	I0314 19:14:43.333666  988684 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:14:43.333681  988684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:14:43.333751  988684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:14:43.333870  988684 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:14:43.333893  988684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:14:43.333938  988684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:14:43.334022  988684 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:14:43.334033  988684 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:14:43.334062  988684 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:14:43.334126  988684 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.kubernetes-upgrade-097195 san=[127.0.0.1 192.168.50.124 kubernetes-upgrade-097195 localhost minikube]
	I0314 19:14:43.450369  988684 provision.go:177] copyRemoteCerts
	I0314 19:14:43.450454  988684 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:14:43.450492  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:43.453612  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.454021  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.454053  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.454256  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:43.454421  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.454620  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:43.454761  988684 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:14:43.546580  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:14:43.582568  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1241 bytes)
	I0314 19:14:43.642112  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:14:43.676985  988684 provision.go:87] duration metric: took 350.459295ms to configureAuth
	I0314 19:14:43.677017  988684 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:14:43.677338  988684 config.go:182] Loaded profile config "kubernetes-upgrade-097195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:14:43.677466  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:43.680405  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.680899  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:43.680933  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:43.681187  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:43.681465  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.681689  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:43.681876  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:43.682079  988684 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:43.682295  988684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:14:43.682319  988684 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:14:44.462594  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:44.465246  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:44.465634  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:44.465666  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:44.465826  988436 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:14:44.470671  988436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:14:44.484305  988436 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:14:44.484460  988436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:14:44.484509  988436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:14:44.516937  988436 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:14:44.517028  988436 ssh_runner.go:195] Run: which lz4
	I0314 19:14:44.521485  988436 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:14:44.526625  988436 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:14:44.526663  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:14:46.456066  988436 crio.go:444] duration metric: took 1.93461575s to copy over tarball
	I0314 19:14:46.456155  988436 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:14:52.490421  988919 start.go:364] duration metric: took 34.492634606s to acquireMachinesLock for "no-preload-731976"
	I0314 19:14:52.490484  988919 start.go:93] Provisioning new machine with config: &{Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:14:52.490635  988919 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 19:14:49.494219  988436 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.038019069s)
	I0314 19:14:49.494250  988436 crio.go:451] duration metric: took 3.038150607s to extract the tarball
	I0314 19:14:49.494258  988436 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:14:49.539603  988436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:14:49.599773  988436 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:14:49.599810  988436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:14:49.599941  988436 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:14:49.599966  988436 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.599924  988436 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.599981  988436 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.599911  988436 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.600074  988436 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.600089  988436 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.599915  988436 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.601723  988436 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.601724  988436 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.601742  988436 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.602122  988436 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.749995  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.749995  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.760203  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.761697  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.777652  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.797979  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.819214  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:14:49.865166  988436 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:14:49.865239  988436 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.865275  988436 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:14:49.865304  988436 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.865319  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.865332  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.922233  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.942607  988436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:14:49.942662  988436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.942720  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.954508  988436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:14:49.954619  988436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.954671  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.954534  988436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:14:49.954702  988436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.954751  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013099  988436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:14:50.013154  988436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:50.013171  988436 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:14:50.013205  988436 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:14:50.013209  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013228  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:14:50.013243  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013327  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:50.149874  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:50.149946  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:50.149985  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:50.150036  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:50.150086  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:14:50.150159  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:14:50.150203  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:14:50.235991  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:14:50.259400  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:14:50.266436  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:14:50.266551  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:14:50.266635  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:14:50.266695  988436 cache_images.go:92] duration metric: took 666.868199ms to LoadCachedImages
	W0314 19:14:50.266776  988436 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	I0314 19:14:50.266792  988436 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:14:50.266959  988436 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:14:50.267049  988436 ssh_runner.go:195] Run: crio config
	I0314 19:14:50.319163  988436 cni.go:84] Creating CNI manager for ""
	I0314 19:14:50.319196  988436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:14:50.319219  988436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:14:50.319247  988436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:14:50.319462  988436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:14:50.319554  988436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:14:50.332194  988436 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:14:50.332289  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:14:50.344452  988436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:14:50.365405  988436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:14:50.387571  988436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:14:50.406541  988436 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:14:50.410989  988436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:14:50.425781  988436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:14:50.574672  988436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:14:50.595348  988436 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:14:50.595393  988436 certs.go:194] generating shared ca certs ...
	I0314 19:14:50.595448  988436 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.595631  988436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:14:50.595675  988436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:14:50.595685  988436 certs.go:256] generating profile certs ...
	I0314 19:14:50.595751  988436 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:14:50.595766  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt with IP's: []
	I0314 19:14:50.961687  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt ...
	I0314 19:14:50.961725  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt: {Name:mk462f1a561c7e853d83f1337f12dd54e1b11a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.961917  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key ...
	I0314 19:14:50.961937  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key: {Name:mk13cbe9bade4e0c0e1e8edb424f78800ee373b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.962043  988436 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:14:50.962064  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.211]
	I0314 19:14:51.207477  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff ...
	I0314 19:14:51.207520  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff: {Name:mkf44ad6b646a50ac6bf1e23895fadd371a28f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.209199  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff ...
	I0314 19:14:51.209232  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff: {Name:mkcc29eb399e964534b152a6b8e9d73e64611845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.209378  988436 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt
	I0314 19:14:51.209493  988436 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key
	I0314 19:14:51.209583  988436 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:14:51.209610  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt with IP's: []
	I0314 19:14:51.305439  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt ...
	I0314 19:14:51.305472  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt: {Name:mk17654c8a0256296f6afb11e28f678027798ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.324122  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key ...
	I0314 19:14:51.324154  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key: {Name:mk04faffa245e09a31a95155ce70b6b328f12ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.324471  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:14:51.324528  988436 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:14:51.324544  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:14:51.324575  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:14:51.324606  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:14:51.324637  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:14:51.324702  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:14:51.325477  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:14:51.354119  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:14:51.380946  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:14:51.411789  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:14:51.458353  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:14:51.483171  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:14:51.520147  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:14:51.548815  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:14:51.604141  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:14:51.632783  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:14:51.660399  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:14:51.687968  988436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:14:51.707887  988436 ssh_runner.go:195] Run: openssl version
	I0314 19:14:51.714430  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:14:51.726481  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.731724  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.731802  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.738051  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:14:51.749973  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:14:51.763098  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.768588  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.768668  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.775323  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:14:51.787855  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:14:51.800610  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.805963  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.806041  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.813261  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:14:51.827741  988436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:14:51.832613  988436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:14:51.832672  988436 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:14:51.832794  988436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:14:51.832852  988436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:14:51.878164  988436 cri.go:89] found id: ""
	I0314 19:14:51.878252  988436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:14:51.892635  988436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:14:51.903863  988436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:14:51.917871  988436 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:14:51.917895  988436 kubeadm.go:156] found existing configuration files:
	
	I0314 19:14:51.917936  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:14:51.931245  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:14:51.931311  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:14:51.944845  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:14:51.956603  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:14:51.956662  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:14:51.968586  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:14:51.981838  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:14:51.981904  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:14:51.995108  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:14:52.006432  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:14:52.006578  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:14:52.019652  988436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:14:52.181954  988436 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:14:52.182145  988436 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:14:52.399528  988436 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:14:52.399727  988436 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:14:52.399901  988436 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:14:52.638472  988436 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:14:52.543459  988919 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:14:52.543720  988919 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:14:52.543776  988919 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:14:52.561036  988919 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44201
	I0314 19:14:52.561625  988919 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:14:52.562352  988919 main.go:141] libmachine: Using API Version  1
	I0314 19:14:52.562385  988919 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:14:52.562892  988919 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:14:52.563125  988919 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:14:52.563281  988919 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:14:52.563454  988919 start.go:159] libmachine.API.Create for "no-preload-731976" (driver="kvm2")
	I0314 19:14:52.563483  988919 client.go:168] LocalClient.Create starting
	I0314 19:14:52.563519  988919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:14:52.563573  988919 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:52.563595  988919 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:52.563701  988919 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:14:52.563732  988919 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:52.563749  988919 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:52.563773  988919 main.go:141] libmachine: Running pre-create checks...
	I0314 19:14:52.563785  988919 main.go:141] libmachine: (no-preload-731976) Calling .PreCreateCheck
	I0314 19:14:52.564238  988919 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:14:52.564699  988919 main.go:141] libmachine: Creating machine...
	I0314 19:14:52.564720  988919 main.go:141] libmachine: (no-preload-731976) Calling .Create
	I0314 19:14:52.564901  988919 main.go:141] libmachine: (no-preload-731976) Creating KVM machine...
	I0314 19:14:52.566445  988919 main.go:141] libmachine: (no-preload-731976) DBG | found existing default KVM network
	I0314 19:14:52.568562  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:52.568393  989124 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00026e0d0}
	I0314 19:14:52.568591  988919 main.go:141] libmachine: (no-preload-731976) DBG | created network xml: 
	I0314 19:14:52.568605  988919 main.go:141] libmachine: (no-preload-731976) DBG | <network>
	I0314 19:14:52.568616  988919 main.go:141] libmachine: (no-preload-731976) DBG |   <name>mk-no-preload-731976</name>
	I0314 19:14:52.568626  988919 main.go:141] libmachine: (no-preload-731976) DBG |   <dns enable='no'/>
	I0314 19:14:52.568653  988919 main.go:141] libmachine: (no-preload-731976) DBG |   
	I0314 19:14:52.568665  988919 main.go:141] libmachine: (no-preload-731976) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0314 19:14:52.568677  988919 main.go:141] libmachine: (no-preload-731976) DBG |     <dhcp>
	I0314 19:14:52.568690  988919 main.go:141] libmachine: (no-preload-731976) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0314 19:14:52.568699  988919 main.go:141] libmachine: (no-preload-731976) DBG |     </dhcp>
	I0314 19:14:52.568705  988919 main.go:141] libmachine: (no-preload-731976) DBG |   </ip>
	I0314 19:14:52.568709  988919 main.go:141] libmachine: (no-preload-731976) DBG |   
	I0314 19:14:52.568714  988919 main.go:141] libmachine: (no-preload-731976) DBG | </network>
	I0314 19:14:52.568722  988919 main.go:141] libmachine: (no-preload-731976) DBG | 
	I0314 19:14:52.616720  988919 main.go:141] libmachine: (no-preload-731976) DBG | trying to create private KVM network mk-no-preload-731976 192.168.39.0/24...
	I0314 19:14:52.702089  988919 main.go:141] libmachine: (no-preload-731976) DBG | private KVM network mk-no-preload-731976 192.168.39.0/24 created
	I0314 19:14:52.702129  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:52.702023  989124 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:52.702142  988919 main.go:141] libmachine: (no-preload-731976) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976 ...
	I0314 19:14:52.702181  988919 main.go:141] libmachine: (no-preload-731976) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:14:52.702227  988919 main.go:141] libmachine: (no-preload-731976) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:14:52.215659  988684 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:14:52.215714  988684 machine.go:97] duration metric: took 9.293303517s to provisionDockerMachine
	I0314 19:14:52.215731  988684 start.go:293] postStartSetup for "kubernetes-upgrade-097195" (driver="kvm2")
	I0314 19:14:52.215748  988684 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:14:52.215770  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:52.216163  988684 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:14:52.216234  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:52.219702  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.220170  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:52.220233  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.220455  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:52.220671  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:52.220845  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:52.221030  988684 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:14:52.316659  988684 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:14:52.321699  988684 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:14:52.321729  988684 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:14:52.321811  988684 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:14:52.321941  988684 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:14:52.322077  988684 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:14:52.333059  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:14:52.364299  988684 start.go:296] duration metric: took 148.54734ms for postStartSetup
	I0314 19:14:52.364365  988684 fix.go:56] duration metric: took 9.466836421s for fixHost
	I0314 19:14:52.364400  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:52.367380  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.367713  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:52.367742  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.367962  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:52.368172  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:52.368346  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:52.368475  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:52.368688  988684 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:52.368880  988684 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.124 22 <nil> <nil>}
	I0314 19:14:52.368894  988684 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:14:52.490242  988684 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443692.470479808
	
	I0314 19:14:52.490267  988684 fix.go:216] guest clock: 1710443692.470479808
	I0314 19:14:52.490274  988684 fix.go:229] Guest: 2024-03-14 19:14:52.470479808 +0000 UTC Remote: 2024-03-14 19:14:52.364372123 +0000 UTC m=+39.279812379 (delta=106.107685ms)
	I0314 19:14:52.490324  988684 fix.go:200] guest clock delta is within tolerance: 106.107685ms
	I0314 19:14:52.490333  988684 start.go:83] releasing machines lock for "kubernetes-upgrade-097195", held for 9.592838263s
	I0314 19:14:52.490363  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:52.490690  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:14:52.493973  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.494415  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:52.494467  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.494611  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:52.495172  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:52.495389  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .DriverName
	I0314 19:14:52.495477  988684 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:14:52.495536  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:52.495645  988684 ssh_runner.go:195] Run: cat /version.json
	I0314 19:14:52.495674  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHHostname
	I0314 19:14:52.498300  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.498593  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.498761  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:52.498790  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.498991  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:52.499090  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:14:52.499118  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:14:52.499184  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:52.499276  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHPort
	I0314 19:14:52.499356  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:52.499431  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHKeyPath
	I0314 19:14:52.499507  988684 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:14:52.499534  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetSSHUsername
	I0314 19:14:52.499692  988684 sshutil.go:53] new ssh client: &{IP:192.168.50.124 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/kubernetes-upgrade-097195/id_rsa Username:docker}
	I0314 19:14:52.588327  988684 ssh_runner.go:195] Run: systemctl --version
	I0314 19:14:52.625808  988684 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:14:52.846082  988684 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:14:52.898128  988684 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:14:52.898215  988684 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:14:52.980196  988684 cni.go:259] no active bridge cni configs found in "/etc/cni/net.d" - nothing to disable
	I0314 19:14:52.980253  988684 start.go:494] detecting cgroup driver to use...
	I0314 19:14:52.980350  988684 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:14:53.115314  988684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:14:52.640883  988436 out.go:204]   - Generating certificates and keys ...
	I0314 19:14:52.640989  988436 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:14:52.641067  988436 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:14:52.764665  988436 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:14:52.875694  988436 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:14:53.088130  988436 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:14:53.299437  988436 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:14:53.592735  988436 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:14:53.593022  988436 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	I0314 19:14:53.727136  988436 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:14:53.727390  988436 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	I0314 19:14:53.898136  988436 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:14:53.986921  988436 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:14:54.116669  988436 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:14:54.117618  988436 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:14:54.265449  988436 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:14:54.426924  988436 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:14:54.691508  988436 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:14:54.890034  988436 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:14:54.919278  988436 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:14:54.919408  988436 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:14:54.920348  988436 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:14:55.126092  988436 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
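In the kubeadm [certs] phase above, the etcd/server and etcd/peer certificates are issued with subject alternative names for the node (localhost, old-k8s-version-968094) and its addresses (192.168.72.211, 127.0.0.1, ::1). A rough sketch of what such a SAN-bearing certificate template looks like with Go's crypto/x509; it is self-signed here for brevity and is not kubeadm's actual code:

	package main

	import (
		"crypto/rand"
		"crypto/rsa"
		"crypto/x509"
		"crypto/x509/pkix"
		"fmt"
		"math/big"
		"net"
		"time"
	)

	func main() {
		// SANs copied from the kubeadm log above for the etcd/server certificate.
		tmpl := &x509.Certificate{
			SerialNumber: big.NewInt(1),
			Subject:      pkix.Name{CommonName: "old-k8s-version-968094"},
			DNSNames:     []string{"localhost", "old-k8s-version-968094"},
			IPAddresses: []net.IP{
				net.ParseIP("192.168.72.211"),
				net.ParseIP("127.0.0.1"),
				net.ParseIP("::1"),
			},
			NotBefore:   time.Now(),
			NotAfter:    time.Now().Add(365 * 24 * time.Hour),
			KeyUsage:    x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
			ExtKeyUsage: []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth, x509.ExtKeyUsageClientAuth},
		}

		key, err := rsa.GenerateKey(rand.Reader, 2048)
		if err != nil {
			panic(err)
		}
		// Self-signed here; kubeadm signs with the etcd/ca key instead.
		der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
		if err != nil {
			panic(err)
		}
		fmt.Printf("issued %d-byte certificate with %d DNS and %d IP SANs\n",
			len(der), len(tmpl.DNSNames), len(tmpl.IPAddresses))
	}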
	I0314 19:14:52.989894  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:52.989740  989124 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa...
	I0314 19:14:53.437718  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:53.437556  989124 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/no-preload-731976.rawdisk...
	I0314 19:14:53.437757  988919 main.go:141] libmachine: (no-preload-731976) DBG | Writing magic tar header
	I0314 19:14:53.437776  988919 main.go:141] libmachine: (no-preload-731976) DBG | Writing SSH key tar header
	I0314 19:14:53.437789  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:53.437691  989124 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976 ...
	I0314 19:14:53.437807  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976
	I0314 19:14:53.437882  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976 (perms=drwx------)
	I0314 19:14:53.437920  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:14:53.437932  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:14:53.437950  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:53.437960  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:14:53.437975  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:14:53.437983  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:14:53.437994  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:14:53.438004  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:14:53.438016  988919 main.go:141] libmachine: (no-preload-731976) DBG | Checking permissions on dir: /home
	I0314 19:14:53.438031  988919 main.go:141] libmachine: (no-preload-731976) DBG | Skipping /home - not owner
	I0314 19:14:53.438045  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:14:53.438054  988919 main.go:141] libmachine: (no-preload-731976) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:14:53.438083  988919 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:14:53.439547  988919 main.go:141] libmachine: (no-preload-731976) define libvirt domain using xml: 
	I0314 19:14:53.439570  988919 main.go:141] libmachine: (no-preload-731976) <domain type='kvm'>
	I0314 19:14:53.439581  988919 main.go:141] libmachine: (no-preload-731976)   <name>no-preload-731976</name>
	I0314 19:14:53.439590  988919 main.go:141] libmachine: (no-preload-731976)   <memory unit='MiB'>2200</memory>
	I0314 19:14:53.439598  988919 main.go:141] libmachine: (no-preload-731976)   <vcpu>2</vcpu>
	I0314 19:14:53.439606  988919 main.go:141] libmachine: (no-preload-731976)   <features>
	I0314 19:14:53.439614  988919 main.go:141] libmachine: (no-preload-731976)     <acpi/>
	I0314 19:14:53.439620  988919 main.go:141] libmachine: (no-preload-731976)     <apic/>
	I0314 19:14:53.439628  988919 main.go:141] libmachine: (no-preload-731976)     <pae/>
	I0314 19:14:53.439650  988919 main.go:141] libmachine: (no-preload-731976)     
	I0314 19:14:53.439659  988919 main.go:141] libmachine: (no-preload-731976)   </features>
	I0314 19:14:53.439667  988919 main.go:141] libmachine: (no-preload-731976)   <cpu mode='host-passthrough'>
	I0314 19:14:53.439675  988919 main.go:141] libmachine: (no-preload-731976)   
	I0314 19:14:53.439682  988919 main.go:141] libmachine: (no-preload-731976)   </cpu>
	I0314 19:14:53.439690  988919 main.go:141] libmachine: (no-preload-731976)   <os>
	I0314 19:14:53.439697  988919 main.go:141] libmachine: (no-preload-731976)     <type>hvm</type>
	I0314 19:14:53.439705  988919 main.go:141] libmachine: (no-preload-731976)     <boot dev='cdrom'/>
	I0314 19:14:53.439712  988919 main.go:141] libmachine: (no-preload-731976)     <boot dev='hd'/>
	I0314 19:14:53.439721  988919 main.go:141] libmachine: (no-preload-731976)     <bootmenu enable='no'/>
	I0314 19:14:53.439731  988919 main.go:141] libmachine: (no-preload-731976)   </os>
	I0314 19:14:53.439740  988919 main.go:141] libmachine: (no-preload-731976)   <devices>
	I0314 19:14:53.439748  988919 main.go:141] libmachine: (no-preload-731976)     <disk type='file' device='cdrom'>
	I0314 19:14:53.439761  988919 main.go:141] libmachine: (no-preload-731976)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/boot2docker.iso'/>
	I0314 19:14:53.439769  988919 main.go:141] libmachine: (no-preload-731976)       <target dev='hdc' bus='scsi'/>
	I0314 19:14:53.439778  988919 main.go:141] libmachine: (no-preload-731976)       <readonly/>
	I0314 19:14:53.439784  988919 main.go:141] libmachine: (no-preload-731976)     </disk>
	I0314 19:14:53.439795  988919 main.go:141] libmachine: (no-preload-731976)     <disk type='file' device='disk'>
	I0314 19:14:53.439805  988919 main.go:141] libmachine: (no-preload-731976)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:14:53.439823  988919 main.go:141] libmachine: (no-preload-731976)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/no-preload-731976.rawdisk'/>
	I0314 19:14:53.439833  988919 main.go:141] libmachine: (no-preload-731976)       <target dev='hda' bus='virtio'/>
	I0314 19:14:53.439849  988919 main.go:141] libmachine: (no-preload-731976)     </disk>
	I0314 19:14:53.439858  988919 main.go:141] libmachine: (no-preload-731976)     <interface type='network'>
	I0314 19:14:53.439868  988919 main.go:141] libmachine: (no-preload-731976)       <source network='mk-no-preload-731976'/>
	I0314 19:14:53.439885  988919 main.go:141] libmachine: (no-preload-731976)       <model type='virtio'/>
	I0314 19:14:53.439895  988919 main.go:141] libmachine: (no-preload-731976)     </interface>
	I0314 19:14:53.439908  988919 main.go:141] libmachine: (no-preload-731976)     <interface type='network'>
	I0314 19:14:53.439918  988919 main.go:141] libmachine: (no-preload-731976)       <source network='default'/>
	I0314 19:14:53.439928  988919 main.go:141] libmachine: (no-preload-731976)       <model type='virtio'/>
	I0314 19:14:53.439937  988919 main.go:141] libmachine: (no-preload-731976)     </interface>
	I0314 19:14:53.439948  988919 main.go:141] libmachine: (no-preload-731976)     <serial type='pty'>
	I0314 19:14:53.439959  988919 main.go:141] libmachine: (no-preload-731976)       <target port='0'/>
	I0314 19:14:53.439969  988919 main.go:141] libmachine: (no-preload-731976)     </serial>
	I0314 19:14:53.439977  988919 main.go:141] libmachine: (no-preload-731976)     <console type='pty'>
	I0314 19:14:53.439989  988919 main.go:141] libmachine: (no-preload-731976)       <target type='serial' port='0'/>
	I0314 19:14:53.439998  988919 main.go:141] libmachine: (no-preload-731976)     </console>
	I0314 19:14:53.440005  988919 main.go:141] libmachine: (no-preload-731976)     <rng model='virtio'>
	I0314 19:14:53.440015  988919 main.go:141] libmachine: (no-preload-731976)       <backend model='random'>/dev/random</backend>
	I0314 19:14:53.440022  988919 main.go:141] libmachine: (no-preload-731976)     </rng>
	I0314 19:14:53.440031  988919 main.go:141] libmachine: (no-preload-731976)     
	I0314 19:14:53.440037  988919 main.go:141] libmachine: (no-preload-731976)     
	I0314 19:14:53.440046  988919 main.go:141] libmachine: (no-preload-731976)   </devices>
	I0314 19:14:53.440057  988919 main.go:141] libmachine: (no-preload-731976) </domain>
	I0314 19:14:53.440068  988919 main.go:141] libmachine: (no-preload-731976) 
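The XML logged above is the full libvirt definition for the no-preload-731976 guest: 2200 MiB of RAM, 2 vCPUs, the boot2docker ISO as a CD-ROM boot device, a raw-format system disk, and two virtio NICs (the private mk-no-preload-731976 network plus libvirt's default network). A minimal sketch of defining and starting such a domain with the libvirt Go bindings, assuming the XML above has been written to a local file; this is an illustration, not the kvm2 driver's actual code:

	package main

	import (
		"fmt"
		"os"

		libvirt "libvirt.org/go/libvirt"
	)

	func main() {
		// Connect to the same system URI the logs show (KVMQemuURI:qemu:///system).
		conn, err := libvirt.NewConnect("qemu:///system")
		if err != nil {
			panic(err)
		}
		defer conn.Close()

		// domain.xml would hold the <domain type='kvm'>...</domain> document
		// printed in the log above (hypothetical local file name).
		xml, err := os.ReadFile("domain.xml")
		if err != nil {
			panic(err)
		}

		// Define the persistent domain, then create (start) it, mirroring the
		// "define libvirt domain using xml" / "Creating domain..." steps above.
		dom, err := conn.DomainDefineXML(string(xml))
		if err != nil {
			panic(err)
		}
		defer dom.Free()

		if err := dom.Create(); err != nil {
			panic(err)
		}
		name, _ := dom.GetName()
		fmt.Println("started domain", name)
	}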
	I0314 19:14:53.445354  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:27:4f:d0 in network default
	I0314 19:14:53.445970  988919 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:14:53.446003  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:53.446831  988919 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:14:53.447309  988919 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:14:53.448507  988919 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:14:53.449538  988919 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:14:54.832090  988919 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:14:54.833175  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:54.833735  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:54.833761  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:54.833718  989124 retry.go:31] will retry after 279.891568ms: waiting for machine to come up
	I0314 19:14:55.115512  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:55.116193  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:55.116234  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:55.116079  989124 retry.go:31] will retry after 272.572393ms: waiting for machine to come up
	I0314 19:14:55.390941  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:55.391570  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:55.391604  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:55.391518  989124 retry.go:31] will retry after 420.591205ms: waiting for machine to come up
	I0314 19:14:55.814029  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:55.814602  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:55.814637  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:55.814540  989124 retry.go:31] will retry after 595.321794ms: waiting for machine to come up
	I0314 19:14:56.411104  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:56.411622  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:56.411656  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:56.411569  989124 retry.go:31] will retry after 731.075887ms: waiting for machine to come up
	I0314 19:14:57.144445  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:57.144896  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:57.144939  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:57.144849  989124 retry.go:31] will retry after 897.626718ms: waiting for machine to come up
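While the new domain boots, the driver polls for a DHCP-assigned IP address and sleeps for a progressively longer interval between attempts, which is what the retry.go lines above record (delays growing from roughly 280ms towards one second and beyond). A simplified sketch of that poll-with-backoff loop; the lookupIP helper, the jitter, and the placeholder address are illustrative assumptions:

	package main

	import (
		"errors"
		"fmt"
		"math/rand"
		"time"
	)

	// lookupIP stands in for querying the libvirt DHCP leases for the domain's
	// MAC address; here it simply fails for the first few attempts, the way the
	// real lookup does until the guest has actually requested a lease.
	func lookupIP(attempt int) (string, error) {
		if attempt < 5 {
			return "", errors.New("unable to find current IP address of domain")
		}
		return "192.0.2.10", nil // RFC 5737 placeholder, not an address from the log
	}

	func main() {
		backoff := 250 * time.Millisecond
		deadline := time.Now().Add(4 * time.Minute)

		for attempt := 0; time.Now().Before(deadline); attempt++ {
			ip, err := lookupIP(attempt)
			if err == nil {
				fmt.Println("machine is up at", ip)
				return
			}
			// Grow the wait with a little jitter, roughly matching the
			// increasing delays printed by retry.go above.
			sleep := backoff + time.Duration(rand.Int63n(int64(backoff/2)))
			fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
			time.Sleep(sleep)
			backoff += backoff / 2
		}
		fmt.Println("timed out waiting for machine to come up")
	}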
	I0314 19:14:53.339339  988684 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:14:53.339416  988684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:14:53.509931  988684 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:14:53.683517  988684 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:14:54.072975  988684 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:14:54.406087  988684 docker.go:233] disabling docker service ...
	I0314 19:14:54.406189  988684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:14:54.508501  988684 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:14:54.530663  988684 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:14:54.781742  988684 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:14:55.134842  988684 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:14:55.179617  988684 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:14:55.216862  988684 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:14:55.216981  988684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:55.234843  988684 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:14:55.234926  988684 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:55.254432  988684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:55.269775  988684 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:55.284778  988684 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:14:55.299022  988684 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:14:55.311737  988684 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:14:55.325247  988684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:14:55.532800  988684 ssh_runner.go:195] Run: sudo systemctl restart crio
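After disabling the Docker-based runtimes, the runner rewrites CRI-O's drop-in config in place, pointing pause_image at registry.k8s.io/pause:3.9 and forcing the cgroupfs cgroup manager with conmon_cgroup = "pod", then reloads systemd and restarts crio. A small sketch of the same line-oriented substitution done in Go instead of sed; the file path is taken from the log, while the helper itself is illustrative:

	package main

	import (
		"fmt"
		"os"
		"regexp"
	)

	// setKey replaces any existing `key = ...` line in the config body, mimicking
	// the `sed -i 's|^.*key = .*$|key = "value"|'` calls shown in the log.
	func setKey(body, key, value string) string {
		re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
		return re.ReplaceAllString(body, fmt.Sprintf("%s = %q", key, value))
	}

	func main() {
		const path = "/etc/crio/crio.conf.d/02-crio.conf"
		data, err := os.ReadFile(path)
		if err != nil {
			panic(err)
		}
		out := setKey(string(data), "pause_image", "registry.k8s.io/pause:3.9")
		out = setKey(out, "cgroup_manager", "cgroupfs")
		if err := os.WriteFile(path, []byte(out), 0o644); err != nil {
			panic(err)
		}
		fmt.Println("updated", path)
	}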
	I0314 19:14:55.127662  988436 out.go:204]   - Booting up control plane ...
	I0314 19:14:55.127792  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:14:55.137884  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:14:55.139279  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:14:55.140805  988436 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:14:55.148289  988436 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:14:58.044259  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:58.044733  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:58.044770  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:58.044681  989124 retry.go:31] will retry after 915.759756ms: waiting for machine to come up
	I0314 19:14:58.962494  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:14:58.963029  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:14:58.963055  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:14:58.962972  989124 retry.go:31] will retry after 1.210225937s: waiting for machine to come up
	I0314 19:15:00.175260  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:00.175709  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:00.175742  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:00.175658  989124 retry.go:31] will retry after 1.36834609s: waiting for machine to come up
	I0314 19:15:01.546345  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:01.546843  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:01.546866  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:01.546785  989124 retry.go:31] will retry after 1.605576775s: waiting for machine to come up
	I0314 19:15:06.044096  988684 ssh_runner.go:235] Completed: sudo systemctl restart crio: (10.511251043s)
	I0314 19:15:06.044126  988684 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:15:06.044171  988684 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:15:06.051398  988684 start.go:562] Will wait 60s for crictl version
	I0314 19:15:06.051464  988684 ssh_runner.go:195] Run: which crictl
	I0314 19:15:06.056535  988684 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:15:06.106234  988684 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
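Restarting CRI-O took about 10.5s here, after which minikube waits up to 60s for the runtime socket to reappear and then for crictl to report a version. A tiny sketch of the socket-wait step; the path comes from the log, while the polling interval is an assumption:

	package main

	import (
		"fmt"
		"os"
		"time"
	)

	// waitForSocket polls until the CRI socket exists or the timeout elapses,
	// like the "Will wait 60s for socket path /var/run/crio/crio.sock" step above.
	func waitForSocket(path string, timeout time.Duration) error {
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			if _, err := os.Stat(path); err == nil {
				return nil
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("timed out after %v waiting for %s", timeout, path)
	}

	func main() {
		if err := waitForSocket("/var/run/crio/crio.sock", 60*time.Second); err != nil {
			panic(err)
		}
		fmt.Println("crio socket is ready")
	}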
	I0314 19:15:06.106339  988684 ssh_runner.go:195] Run: crio --version
	I0314 19:15:06.146851  988684 ssh_runner.go:195] Run: crio --version
	I0314 19:15:06.189558  988684 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 19:15:03.154357  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:03.154870  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:03.154902  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:03.154813  989124 retry.go:31] will retry after 2.565617877s: waiting for machine to come up
	I0314 19:15:05.721681  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:05.722350  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:05.722374  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:05.722302  989124 retry.go:31] will retry after 2.676476789s: waiting for machine to come up
	I0314 19:15:06.191084  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) Calling .GetIP
	I0314 19:15:06.194842  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:15:06.195317  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:3b:7f:e0", ip: ""} in network mk-kubernetes-upgrade-097195: {Iface:virbr2 ExpiryTime:2024-03-14 20:13:44 +0000 UTC Type:0 Mac:52:54:00:3b:7f:e0 Iaid: IPaddr:192.168.50.124 Prefix:24 Hostname:kubernetes-upgrade-097195 Clientid:01:52:54:00:3b:7f:e0}
	I0314 19:15:06.195390  988684 main.go:141] libmachine: (kubernetes-upgrade-097195) DBG | domain kubernetes-upgrade-097195 has defined IP address 192.168.50.124 and MAC address 52:54:00:3b:7f:e0 in network mk-kubernetes-upgrade-097195
	I0314 19:15:06.195743  988684 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:15:06.225888  988684 kubeadm.go:877] updating cluster {Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPor
t:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:15:06.226059  988684 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:15:06.226134  988684 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:15:06.599919  988684 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:15:06.599957  988684 crio.go:415] Images already preloaded, skipping extraction
	I0314 19:15:06.600021  988684 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:15:06.948752  988684 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:15:06.948785  988684 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:15:06.948796  988684 kubeadm.go:928] updating node { 192.168.50.124 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:15:06.948935  988684 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=kubernetes-upgrade-097195 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.124
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:15:06.949015  988684 ssh_runner.go:195] Run: crio config
	I0314 19:15:07.571375  988684 cni.go:84] Creating CNI manager for ""
	I0314 19:15:07.571403  988684 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:15:07.571415  988684 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:15:07.571444  988684 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.124 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:kubernetes-upgrade-097195 NodeName:kubernetes-upgrade-097195 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.124"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.124 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/cert
s/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:15:07.571608  988684 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.124
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "kubernetes-upgrade-097195"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.124
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.124"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:15:07.571690  988684 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:15:07.623863  988684 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:15:07.623970  988684 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:15:07.658661  988684 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (330 bytes)
	I0314 19:15:07.706505  988684 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:15:07.748762  988684 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2174 bytes)
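The kubeadm.yaml generated above (2174 bytes, copied to /var/tmp/minikube/kubeadm.yaml.new) is a multi-document YAML stream carrying four objects: InitConfiguration, ClusterConfiguration, KubeletConfiguration, and KubeProxyConfiguration. A small sketch that walks such a stream and reports each document's apiVersion and kind, using gopkg.in/yaml.v3; this is a generic check, not part of minikube:

	package main

	import (
		"errors"
		"fmt"
		"io"
		"os"

		"gopkg.in/yaml.v3"
	)

	type typeMeta struct {
		APIVersion string `yaml:"apiVersion"`
		Kind       string `yaml:"kind"`
	}

	func main() {
		// Hypothetical local copy of the multi-document config shown above.
		f, err := os.Open("kubeadm.yaml")
		if err != nil {
			panic(err)
		}
		defer f.Close()

		dec := yaml.NewDecoder(f)
		for {
			var tm typeMeta
			if err := dec.Decode(&tm); err != nil {
				if errors.Is(err, io.EOF) {
					break
				}
				panic(err)
			}
			fmt.Printf("%s %s\n", tm.APIVersion, tm.Kind)
		}
	}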
	I0314 19:15:07.785656  988684 ssh_runner.go:195] Run: grep 192.168.50.124	control-plane.minikube.internal$ /etc/hosts
	I0314 19:15:07.793111  988684 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:15:08.011748  988684 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:15:08.027893  988684 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195 for IP: 192.168.50.124
	I0314 19:15:08.027916  988684 certs.go:194] generating shared ca certs ...
	I0314 19:15:08.027939  988684 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:15:08.028097  988684 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:15:08.028142  988684 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:15:08.028152  988684 certs.go:256] generating profile certs ...
	I0314 19:15:08.028269  988684 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/client.key
	I0314 19:15:08.028316  988684 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key.51bc1de3
	I0314 19:15:08.028362  988684 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key
	I0314 19:15:08.028503  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:15:08.028540  988684 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:15:08.028554  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:15:08.028587  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:15:08.028617  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:15:08.028649  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:15:08.028698  988684 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:15:08.029437  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:15:08.067811  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:15:08.110046  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:15:08.400945  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:08.401551  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:08.401582  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:08.401490  989124 retry.go:31] will retry after 3.646157276s: waiting for machine to come up
	I0314 19:15:12.049242  988919 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:15:12.049755  988919 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:15:12.049777  988919 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:15:12.049715  989124 retry.go:31] will retry after 4.271156252s: waiting for machine to come up
	I0314 19:15:08.153346  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:15:08.222390  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 19:15:08.261362  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:15:08.296825  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:15:08.331629  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/kubernetes-upgrade-097195/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:15:08.382541  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:15:08.411539  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:15:08.445681  988684 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:15:08.470599  988684 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:15:08.494394  988684 ssh_runner.go:195] Run: openssl version
	I0314 19:15:08.504782  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:15:08.523733  988684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:15:08.529440  988684 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:15:08.529494  988684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:15:08.558113  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:15:08.605020  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:15:08.619869  988684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:15:08.625683  988684 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:15:08.625740  988684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:15:08.633046  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:15:08.645993  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:15:08.659653  988684 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:15:08.664850  988684 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:15:08.664901  988684 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:15:08.671366  988684 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
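Each CA certificate is installed twice: copied under /usr/share/ca-certificates and then symlinked into /etc/ssl/certs under its OpenSSL subject hash (the 51391683.0, 3ec20f2e.0, and b5213941.0 names above), which is the hashed layout OpenSSL uses to locate trusted CAs. A sketch of the same hash-and-link step driven from Go, wrapping the identical "openssl x509 -hash -noout" invocation; the helper name is illustrative:

	package main

	import (
		"fmt"
		"os"
		"os/exec"
		"path/filepath"
		"strings"
	)

	// linkBySubjectHash creates certsDir/<hash>.0 pointing at certPath, the same
	// layout the `openssl x509 -hash` + `ln -fs` commands above produce.
	func linkBySubjectHash(certPath, certsDir string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
		if err != nil {
			return "", fmt.Errorf("hashing %s: %w", certPath, err)
		}
		hash := strings.TrimSpace(string(out))
		link := filepath.Join(certsDir, hash+".0")
		// Replace any stale link, mirroring `ln -fs`.
		_ = os.Remove(link)
		if err := os.Symlink(certPath, link); err != nil {
			return "", err
		}
		return link, nil
	}

	func main() {
		link, err := linkBySubjectHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs")
		if err != nil {
			panic(err)
		}
		fmt.Println("linked", link)
	}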
	I0314 19:15:08.683487  988684 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:15:08.688833  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:15:08.697409  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:15:08.704805  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:15:08.711087  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:15:08.717529  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:15:08.725711  988684 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
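Before calling StartCluster, minikube verifies that none of the control-plane certificates expire within the next day by running "openssl x509 -checkend 86400" against each of them; the command exits non-zero if the certificate will have expired by then. A minimal Go wrapper expressing the same check; the file list is copied from the log, the wrapper itself is an illustration:

	package main

	import (
		"fmt"
		"os/exec"
	)

	// expiresWithin reports whether the certificate at path will expire within the
	// given number of seconds, using the same `openssl x509 -checkend` probe as
	// the log above (a non-zero exit means "will expire").
	func expiresWithin(path string, seconds int) (bool, error) {
		cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
			"-checkend", fmt.Sprint(seconds))
		if err := cmd.Run(); err != nil {
			if _, ok := err.(*exec.ExitError); ok {
				return true, nil
			}
			return false, err
		}
		return false, nil
	}

	func main() {
		certs := []string{
			"/var/lib/minikube/certs/apiserver-etcd-client.crt",
			"/var/lib/minikube/certs/apiserver-kubelet-client.crt",
			"/var/lib/minikube/certs/etcd/server.crt",
			"/var/lib/minikube/certs/etcd/healthcheck-client.crt",
			"/var/lib/minikube/certs/etcd/peer.crt",
			"/var/lib/minikube/certs/front-proxy-client.crt",
		}
		for _, c := range certs {
			expiring, err := expiresWithin(c, 86400)
			if err != nil {
				panic(err)
			}
			fmt.Printf("%s expires within 24h: %v\n", c, expiring)
		}
	}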
	I0314 19:15:08.732142  988684 kubeadm.go:391] StartCluster: {Name:kubernetes-upgrade-097195 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.29.0-rc.2 ClusterName:kubernetes-upgrade-097195 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.124 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0
MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:15:08.732240  988684 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:15:08.732311  988684 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:15:08.806759  988684 cri.go:89] found id: "cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd"
	I0314 19:15:08.806790  988684 cri.go:89] found id: "5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1"
	I0314 19:15:08.806795  988684 cri.go:89] found id: "6386fa1010245138419b18d1ff647e7c6bf8755818b4091a9b4f81be9db992a9"
	I0314 19:15:08.806800  988684 cri.go:89] found id: "6d573be387a641787df9e531b4add187039a0e30f55597e819e54146f9d18d69"
	I0314 19:15:08.806804  988684 cri.go:89] found id: "a8a5ba446468a0fb69ed3e176187f480b61710c02418bb4f9c3f05f69fd8f7d3"
	I0314 19:15:08.806808  988684 cri.go:89] found id: "bd9ef249bea1f37cc70a4d8288f34e19500abb9e38a218d92e2608791fb7260d"
	I0314 19:15:08.806811  988684 cri.go:89] found id: "e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd"
	I0314 19:15:08.806813  988684 cri.go:89] found id: "24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488"
	I0314 19:15:08.806823  988684 cri.go:89] found id: "f4b0a5ad431719f2576d1fddf1854c77b199f82f7e7db55ed75117c66922e719"
	I0314 19:15:08.806829  988684 cri.go:89] found id: "3c97bdb17b4bf8e17fa137064180b80bfd3f08fc7c4972fea71fe5d9559c8aa9"
	I0314 19:15:08.806832  988684 cri.go:89] found id: "d019c86c69bfd42aeeb29d110482147283c27eb5c9910f731eb3a127561bbf31"
	I0314 19:15:08.806834  988684 cri.go:89] found id: "58e07a183556e42cb08afcebaa5a64a7048e39660613de46111a7f19d2599c9f"
	I0314 19:15:08.806838  988684 cri.go:89] found id: "ce185b574a8d3be92f75eb9e705f32207ceacb2fff1b7c89b6e5cbb112f6382d"
	I0314 19:15:08.806841  988684 cri.go:89] found id: "3859de9082482a5084d008eed747861c3910299bbc17d7b93aca323e20295848"
	I0314 19:15:08.806849  988684 cri.go:89] found id: ""
	I0314 19:15:08.806891  988684 ssh_runner.go:195] Run: sudo runc list -f json
	
	
	==> CRI-O <==
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.325438971Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=81364f9f-de31-4449-bb95-d9ce85dd9bb6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.325518130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=81364f9f-de31-4449-bb95-d9ce85dd9bb6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.325714954Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3acff996cea6cfc7826c2cfca1b826e0236ef8dd8a1f3fe3dc6e6d67546c51,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710443726604965376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8221aedfb55245e630f3cb2af5369ff62af344dc3f6a132488bcd86d69950dbb,PodSandboxId:3bec24b905b9c13d4813c11bd6631c3aaabac5424e6491aef019d36e2b74c68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726624988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8152002516f22df513886d7424490a30ac1da384fb36c01da6dd1d72ec6c57,PodSandboxId:2f9e833956855798962d16453cba25750ff48f2830760f432e058dd795990aef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726636928232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4
c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa81d966614f154ddfab2b73bf6eb5011958d54b230299a3706a249a2de01d0,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710
443726592356915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6162095f46ede76f29a487554d3f744693565c7918d1e584b31cf4c73a01411,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710443722988633242,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb9ea7203ecf4708bff00a385d4462f4bcec15e6cbe794d49d01bbfdf180a6,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:17104437229784745
98,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9825c5969a3f3407a2ab964c746edb91e9e09a7ca71d02c816b6cd26db5bb3ab,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:17104437229578
90052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266118d1e2ae0a6b0fe79eb4e48da42478835fa3e494ef2894a0da8d2779198e,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710443722967047219,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=81364f9f-de31-4449-bb95-d9ce85dd9bb6 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.347430966Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=06bd075d-0fdd-46cf-a52d-8a64799d6358 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.347544129Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=06bd075d-0fdd-46cf-a52d-8a64799d6358 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.349665242Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=66490cf8-8773-4f37-86ea-dde926d60b30 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.350399851Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710443730350365759,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=66490cf8-8773-4f37-86ea-dde926d60b30 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.351201010Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b620975a-6756-4ab2-b60c-f889a9656418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.351294226Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b620975a-6756-4ab2-b60c-f889a9656418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.353616516Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3acff996cea6cfc7826c2cfca1b826e0236ef8dd8a1f3fe3dc6e6d67546c51,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710443726604965376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8221aedfb55245e630f3cb2af5369ff62af344dc3f6a132488bcd86d69950dbb,PodSandboxId:3bec24b905b9c13d4813c11bd6631c3aaabac5424e6491aef019d36e2b74c68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726624988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8152002516f22df513886d7424490a30ac1da384fb36c01da6dd1d72ec6c57,PodSandboxId:2f9e833956855798962d16453cba25750ff48f2830760f432e058dd795990aef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726636928232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4
c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa81d966614f154ddfab2b73bf6eb5011958d54b230299a3706a249a2de01d0,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710
443726592356915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6162095f46ede76f29a487554d3f744693565c7918d1e584b31cf4c73a01411,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710443722988633242,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb9ea7203ecf4708bff00a385d4462f4bcec15e6cbe794d49d01bbfdf180a6,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:17104437229784745
98,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9825c5969a3f3407a2ab964c746edb91e9e09a7ca71d02c816b6cd26db5bb3ab,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:17104437229578
90052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266118d1e2ae0a6b0fe79eb4e48da42478835fa3e494ef2894a0da8d2779198e,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710443722967047219,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710443707050319631,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710443707150918667,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6386fa1010245138419b18d1ff647e7c6bf8755818b4091a9b4f81be9db992a9,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710443707043607864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d573be387a641787df9e531b4add187039a0e30f55597e819e54146f9d18d69,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710443706951726293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a5ba446468a0fb69ed3e176187f480b61710c02418bb4f9c3f05f69fd8f7d3,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710443706934825945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9ef249bea1f37cc70a4d8288f34e19500abb9e38a218d92e2608791fb7260d,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710443706871370800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd,PodSandboxId:14a14fae6c6082afeee43370e54fd5d8706e5fff8355bf35e0d8a84c400203e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694339903825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-
4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488,PodSandboxId:abc505950e3bd55b5141c1b484d77453fabbd31467ddc7ece8b83a13ad01bed6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694324731481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b620975a-6756-4ab2-b60c-f889a9656418 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.416990260Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=6d7758e2-9483-4751-a909-e8e08aa4bd17 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.417089262Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=6d7758e2-9483-4751-a909-e8e08aa4bd17 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.418503350Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b0b14ac6-aaec-47bd-bfd1-7b833cbff09c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.418876627Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710443730418855357,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b0b14ac6-aaec-47bd-bfd1-7b833cbff09c name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.419637105Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e765337e-e768-470e-8e7d-42b7a9592de3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.419691675Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e765337e-e768-470e-8e7d-42b7a9592de3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.420300579Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3acff996cea6cfc7826c2cfca1b826e0236ef8dd8a1f3fe3dc6e6d67546c51,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710443726604965376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8221aedfb55245e630f3cb2af5369ff62af344dc3f6a132488bcd86d69950dbb,PodSandboxId:3bec24b905b9c13d4813c11bd6631c3aaabac5424e6491aef019d36e2b74c68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726624988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8152002516f22df513886d7424490a30ac1da384fb36c01da6dd1d72ec6c57,PodSandboxId:2f9e833956855798962d16453cba25750ff48f2830760f432e058dd795990aef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726636928232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4
c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa81d966614f154ddfab2b73bf6eb5011958d54b230299a3706a249a2de01d0,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710
443726592356915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6162095f46ede76f29a487554d3f744693565c7918d1e584b31cf4c73a01411,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710443722988633242,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb9ea7203ecf4708bff00a385d4462f4bcec15e6cbe794d49d01bbfdf180a6,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:17104437229784745
98,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9825c5969a3f3407a2ab964c746edb91e9e09a7ca71d02c816b6cd26db5bb3ab,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:17104437229578
90052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266118d1e2ae0a6b0fe79eb4e48da42478835fa3e494ef2894a0da8d2779198e,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710443722967047219,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710443707050319631,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710443707150918667,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6386fa1010245138419b18d1ff647e7c6bf8755818b4091a9b4f81be9db992a9,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710443707043607864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d573be387a641787df9e531b4add187039a0e30f55597e819e54146f9d18d69,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710443706951726293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a5ba446468a0fb69ed3e176187f480b61710c02418bb4f9c3f05f69fd8f7d3,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710443706934825945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9ef249bea1f37cc70a4d8288f34e19500abb9e38a218d92e2608791fb7260d,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710443706871370800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd,PodSandboxId:14a14fae6c6082afeee43370e54fd5d8706e5fff8355bf35e0d8a84c400203e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694339903825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-
4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488,PodSandboxId:abc505950e3bd55b5141c1b484d77453fabbd31467ddc7ece8b83a13ad01bed6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694324731481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=e765337e-e768-470e-8e7d-42b7a9592de3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.467005054Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b31abaad-5bd9-4df8-98d8-658383cb7827 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.467104661Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b31abaad-5bd9-4df8-98d8-658383cb7827 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.470274398Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=bb9ee5f5-af49-4691-9974-b89e70450c88 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.470771298Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710443730470737920,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:121256,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=bb9ee5f5-af49-4691-9974-b89e70450c88 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.471651575Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=11c1d29f-cbd0-4273-a102-c7c1fdbed576 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.471907215Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=11c1d29f-cbd0-4273-a102-c7c1fdbed576 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.473779266Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:ef3acff996cea6cfc7826c2cfca1b826e0236ef8dd8a1f3fe3dc6e6d67546c51,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:3,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710443726604965376,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 3,io.kubernetes.container.termin
ationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8221aedfb55245e630f3cb2af5369ff62af344dc3f6a132488bcd86d69950dbb,PodSandboxId:3bec24b905b9c13d4813c11bd6631c3aaabac5424e6491aef019d36e2b74c68b,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726624988558,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\"
:\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:dc8152002516f22df513886d7424490a30ac1da384fb36c01da6dd1d72ec6c57,PodSandboxId:2f9e833956855798962d16453cba25750ff48f2830760f432e058dd795990aef,Metadata:&ContainerMetadata{Name:coredns,Attempt:2,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710443726636928232,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4
c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aa81d966614f154ddfab2b73bf6eb5011958d54b230299a3706a249a2de01d0,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:3,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710
443726592356915,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b6162095f46ede76f29a487554d3f744693565c7918d1e584b31cf4c73a01411,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:3,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710443722988633242,L
abels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:79fb9ea7203ecf4708bff00a385d4462f4bcec15e6cbe794d49d01bbfdf180a6,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:3,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:17104437229784745
98,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9825c5969a3f3407a2ab964c746edb91e9e09a7ca71d02c816b6cd26db5bb3ab,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:3,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:17104437229578
90052,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:266118d1e2ae0a6b0fe79eb4e48da42478835fa3e494ef2894a0da8d2779198e,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:3,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710443722967047219,Labels:map[string]s
tring{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 3,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1,PodSandboxId:2a9854eae5d1b82867ac2a2751c7c9b17b3b38abbe2e853622bb35309d36809e,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:2,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_EXITED,CreatedAt:1710443707050319631,Labels:map[string]string{io.k
ubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-8l5s8,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7f0d6300-566d-4eac-9fe2-05cebc5935c8,},Annotations:map[string]string{io.kubernetes.container.hash: a378d250,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd,PodSandboxId:868e5e7e76acc189df513de5b33d7e1802c6e394f3cbf6b84c4e64371ef82eb6,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710443707150918667,Labels:map[string]string{io.kubernetes.container.name:
storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4eb17fcf-73a7-4f6d-b451-7285dcc53df3,},Annotations:map[string]string{io.kubernetes.container.hash: 3dee5e4d,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6386fa1010245138419b18d1ff647e7c6bf8755818b4091a9b4f81be9db992a9,PodSandboxId:1f65b7c75114bce8ed808e137fe3e3bd226ab62ed3ada75164faa9d45906133a,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_EXITED,CreatedAt:1710443707043607864,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io
.kubernetes.pod.name: kube-apiserver-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: dd5e5da955f442ea93effbf0c48c93c3,},Annotations:map[string]string{io.kubernetes.container.hash: ba199de0,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6d573be387a641787df9e531b4add187039a0e30f55597e819e54146f9d18d69,PodSandboxId:877e9e790a6a2caed0f475a9e94bc85a46251d509e95eb69575e94680816572b,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_EXITED,CreatedAt:1710443706951726293,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd
-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 33696b7f14d47cd2e8d3a2fb42f10b0a,},Annotations:map[string]string{io.kubernetes.container.hash: ea980421,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a8a5ba446468a0fb69ed3e176187f480b61710c02418bb4f9c3f05f69fd8f7d3,PodSandboxId:0867ba19c080f313fdb3dc730dd5902a375f9e623aa5d496dcc5870b62d421ae,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_EXITED,CreatedAt:1710443706934825945,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: ku
be-controller-manager-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ddc65170252378ea2e5c256d3be576c6,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:bd9ef249bea1f37cc70a4d8288f34e19500abb9e38a218d92e2608791fb7260d,PodSandboxId:7fa9b8a2b55284e4c28c058675b90ca039ac927602280fa2fa50914821574f04,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_EXITED,CreatedAt:1710443706871370800,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name:
kube-scheduler-kubernetes-upgrade-097195,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9c1cba36676c6bd2a2f9340855d4ec06,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd,PodSandboxId:14a14fae6c6082afeee43370e54fd5d8706e5fff8355bf35e0d8a84c400203e7,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694339903825,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-
4x6mw,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 316a0847-3873-4cb4-aed8-4c2f1c4a15bf,},Annotations:map[string]string{io.kubernetes.container.hash: 2962e8d,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488,PodSandboxId:abc505950e3bd55b5141c1b484d77453fabbd31467ddc7ece8b83a13ad01bed6,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a6
74fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_EXITED,CreatedAt:1710443694324731481,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mk7kd,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ad6c66e2-dc9e-4c04-90f3-e183d2297cfe,},Annotations:map[string]string{io.kubernetes.container.hash: 84e83d12,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=11c1d29f-cbd0-4273-a102-c7c1fdbed576 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:15:30 kubernetes-upgrade-097195 crio[2752]: time="2024-03-14 19:15:30.518469787Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=6c279845-6251-4dee-9d0a-c388707e12f1 name=/runtime.v1.RuntimeService/ListPodSandbox
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	dc8152002516f       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   2f9e833956855       coredns-76f75df574-mk7kd
	8221aedfb5524       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   3 seconds ago       Running             coredns                   2                   3bec24b905b9c       coredns-76f75df574-4x6mw
	ef3acff996cea       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   3 seconds ago       Running             kube-proxy                3                   2a9854eae5d1b       kube-proxy-8l5s8
	7aa81d966614f       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   4 seconds ago       Running             storage-provisioner       3                   868e5e7e76acc       storage-provisioner
	b6162095f46ed       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   7 seconds ago       Running             kube-scheduler            3                   7fa9b8a2b5528       kube-scheduler-kubernetes-upgrade-097195
	79fb9ea7203ec       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   7 seconds ago       Running             kube-controller-manager   3                   0867ba19c080f       kube-controller-manager-kubernetes-upgrade-097195
	266118d1e2ae0       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   7 seconds ago       Running             kube-apiserver            3                   1f65b7c75114b       kube-apiserver-kubernetes-upgrade-097195
	9825c5969a3f3       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   7 seconds ago       Running             etcd                      3                   877e9e790a6a2       etcd-kubernetes-upgrade-097195
	cb2637f9b9729       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   23 seconds ago      Exited              storage-provisioner       2                   868e5e7e76acc       storage-provisioner
	5407193dadf65       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834   23 seconds ago      Exited              kube-proxy                2                   2a9854eae5d1b       kube-proxy-8l5s8
	6386fa1010245       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f   23 seconds ago      Exited              kube-apiserver            2                   1f65b7c75114b       kube-apiserver-kubernetes-upgrade-097195
	6d573be387a64       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7   23 seconds ago      Exited              etcd                      2                   877e9e790a6a2       etcd-kubernetes-upgrade-097195
	a8a5ba446468a       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d   23 seconds ago      Exited              kube-controller-manager   2                   0867ba19c080f       kube-controller-manager-kubernetes-upgrade-097195
	bd9ef249bea1f       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210   23 seconds ago      Exited              kube-scheduler            2                   7fa9b8a2b5528       kube-scheduler-kubernetes-upgrade-097195
	e75d4116262eb       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   14a14fae6c608       coredns-76f75df574-4x6mw
	24f500e1a28ac       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4   36 seconds ago      Exited              coredns                   1                   abc505950e3bd       coredns-76f75df574-mk7kd
	
	
	==> coredns [24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> coredns [8221aedfb55245e630f3cb2af5369ff62af344dc3f6a132488bcd86d69950dbb] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [dc8152002516f22df513886d7424490a30ac1da384fb36c01da6dd1d72ec6c57] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	
	
	==> coredns [e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd] <==
	[INFO] SIGTERM: Shutting down servers then terminating
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[INFO] plugin/kubernetes: waiting for Kubernetes API before starting server
	[WARNING] plugin/kubernetes: starting server with unsynced Kubernetes API
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] plugin/health: Going into lameduck mode for 5s
	
	
	==> describe nodes <==
	Name:               kubernetes-upgrade-097195
	Roles:              <none>
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=kubernetes-upgrade-097195
	                    kubernetes.io/os=linux
	Annotations:        node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:14:06 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  kubernetes-upgrade-097195
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:15:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:15:26 +0000   Thu, 14 Mar 2024 19:14:04 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:15:26 +0000   Thu, 14 Mar 2024 19:14:04 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:15:26 +0000   Thu, 14 Mar 2024 19:14:04 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:15:26 +0000   Thu, 14 Mar 2024 19:14:09 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.124
	  Hostname:    kubernetes-upgrade-097195
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 50c23dc80d6d4497aa668196d1c251c2
	  System UUID:                50c23dc8-0d6d-4497-aa66-8196d1c251c2
	  Boot ID:                    771b3a8f-e4cc-4c12-9cae-3c74b8382058
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (8 in total)
	  Namespace                   Name                                                 CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                 ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-76f75df574-4x6mw                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 coredns-76f75df574-mk7kd                             100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     69s
	  kube-system                 etcd-kubernetes-upgrade-097195                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         75s
	  kube-system                 kube-apiserver-kubernetes-upgrade-097195             250m (12%)    0 (0%)      0 (0%)           0 (0%)         83s
	  kube-system                 kube-controller-manager-kubernetes-upgrade-097195    200m (10%)    0 (0%)      0 (0%)           0 (0%)         80s
	  kube-system                 kube-proxy-8l5s8                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         69s
	  kube-system                 kube-scheduler-kubernetes-upgrade-097195             100m (5%)     0 (0%)      0 (0%)           0 (0%)         78s
	  kube-system                 storage-provisioner                                  0 (0%)        0 (0%)      0 (0%)           0 (0%)         79s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             240Mi (11%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 67s                kube-proxy       
	  Normal  Starting                 3s                 kube-proxy       
	  Normal  Starting                 88s                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  88s (x8 over 88s)  kubelet          Node kubernetes-upgrade-097195 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    88s (x8 over 88s)  kubelet          Node kubernetes-upgrade-097195 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     88s (x7 over 88s)  kubelet          Node kubernetes-upgrade-097195 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  88s                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           69s                node-controller  Node kubernetes-upgrade-097195 event: Registered Node kubernetes-upgrade-097195 in Controller
	
	
	==> dmesg <==
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +9.402053] systemd-fstab-generator[570]: Ignoring "noauto" option for root device
	[  +0.069166] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066813] systemd-fstab-generator[582]: Ignoring "noauto" option for root device
	[  +0.204489] systemd-fstab-generator[596]: Ignoring "noauto" option for root device
	[  +0.142842] systemd-fstab-generator[608]: Ignoring "noauto" option for root device
	[  +0.281019] systemd-fstab-generator[634]: Ignoring "noauto" option for root device
	[  +5.296383] systemd-fstab-generator[733]: Ignoring "noauto" option for root device
	[  +0.061561] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 19:14] systemd-fstab-generator[856]: Ignoring "noauto" option for root device
	[  +8.891812] systemd-fstab-generator[1240]: Ignoring "noauto" option for root device
	[  +0.072471] kauditd_printk_skb: 97 callbacks suppressed
	[ +10.838470] kauditd_printk_skb: 21 callbacks suppressed
	[ +29.753510] kauditd_printk_skb: 68 callbacks suppressed
	[  +1.100814] systemd-fstab-generator[2424]: Ignoring "noauto" option for root device
	[  +0.322395] systemd-fstab-generator[2526]: Ignoring "noauto" option for root device
	[  +0.381370] systemd-fstab-generator[2605]: Ignoring "noauto" option for root device
	[  +0.344286] systemd-fstab-generator[2666]: Ignoring "noauto" option for root device
	[  +0.448183] systemd-fstab-generator[2693]: Ignoring "noauto" option for root device
	[Mar14 19:15] kauditd_printk_skb: 198 callbacks suppressed
	[  +1.597601] systemd-fstab-generator[3582]: Ignoring "noauto" option for root device
	[ +14.114120] systemd-fstab-generator[3869]: Ignoring "noauto" option for root device
	[  +0.095216] kauditd_printk_skb: 110 callbacks suppressed
	[  +5.375167] kauditd_printk_skb: 70 callbacks suppressed
	[  +0.854305] systemd-fstab-generator[4448]: Ignoring "noauto" option for root device
	
	
	==> etcd [6d573be387a641787df9e531b4add187039a0e30f55597e819e54146f9d18d69] <==
	{"level":"info","ts":"2024-03-14T19:15:07.896531Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6b42f328fa3a6e70","initial-advertise-peer-urls":["https://192.168.50.124:2380"],"listen-peer-urls":["https://192.168.50.124:2380"],"advertise-client-urls":["https://192.168.50.124:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.124:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T19:15:09.379681Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T19:15:09.379743Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:15:09.379767Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 received MsgPreVoteResp from 6b42f328fa3a6e70 at term 2"}
	{"level":"info","ts":"2024-03-14T19:15:09.3798Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:09.379806Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 received MsgVoteResp from 6b42f328fa3a6e70 at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:09.379821Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became leader at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:09.379827Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b42f328fa3a6e70 elected leader 6b42f328fa3a6e70 at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:09.384469Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6b42f328fa3a6e70","local-member-attributes":"{Name:kubernetes-upgrade-097195 ClientURLs:[https://192.168.50.124:2379]}","request-path":"/0/members/6b42f328fa3a6e70/attributes","cluster-id":"7f90afa2e0726b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:15:09.384521Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:15:09.384878Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:15:09.393614Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:15:09.39725Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:15:09.397305Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:15:09.429993Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.124:2379"}
	{"level":"info","ts":"2024-03-14T19:15:19.830128Z","caller":"osutil/interrupt_unix.go:64","msg":"received signal; shutting down","signal":"terminated"}
	{"level":"info","ts":"2024-03-14T19:15:19.830254Z","caller":"embed/etcd.go:376","msg":"closing etcd server","name":"kubernetes-upgrade-097195","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.124:2380"],"advertise-client-urls":["https://192.168.50.124:2379"]}
	{"level":"warn","ts":"2024-03-14T19:15:19.830332Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T19:15:19.830357Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 127.0.0.1:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T19:15:19.832579Z","caller":"embed/serve.go:212","msg":"stopping secure grpc server due to error","error":"accept tcp 192.168.50.124:2379: use of closed network connection"}
	{"level":"warn","ts":"2024-03-14T19:15:19.832832Z","caller":"embed/serve.go:214","msg":"stopped secure grpc server due to error","error":"accept tcp 192.168.50.124:2379: use of closed network connection"}
	{"level":"info","ts":"2024-03-14T19:15:19.833101Z","caller":"etcdserver/server.go:1465","msg":"skipped leadership transfer for single voting member cluster","local-member-id":"6b42f328fa3a6e70","current-leader-member-id":"6b42f328fa3a6e70"}
	{"level":"info","ts":"2024-03-14T19:15:19.836942Z","caller":"embed/etcd.go:579","msg":"stopping serving peer traffic","address":"192.168.50.124:2380"}
	{"level":"info","ts":"2024-03-14T19:15:19.837081Z","caller":"embed/etcd.go:584","msg":"stopped serving peer traffic","address":"192.168.50.124:2380"}
	{"level":"info","ts":"2024-03-14T19:15:19.837095Z","caller":"embed/etcd.go:378","msg":"closed etcd server","name":"kubernetes-upgrade-097195","data-dir":"/var/lib/minikube/etcd","advertise-peer-urls":["https://192.168.50.124:2380"],"advertise-client-urls":["https://192.168.50.124:2379"]}
	
	
	==> etcd [9825c5969a3f3407a2ab964c746edb91e9e09a7ca71d02c816b6cd26db5bb3ab] <==
	{"level":"info","ts":"2024-03-14T19:15:23.64636Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:15:23.646399Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:15:23.638841Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 switched to configuration voters=(7729007267843567216)"}
	{"level":"info","ts":"2024-03-14T19:15:23.646555Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"7f90afa2e0726b","local-member-id":"6b42f328fa3a6e70","added-peer-id":"6b42f328fa3a6e70","added-peer-peer-urls":["https://192.168.50.124:2380"]}
	{"level":"info","ts":"2024-03-14T19:15:23.646687Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"7f90afa2e0726b","local-member-id":"6b42f328fa3a6e70","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:15:23.646737Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:15:23.64001Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T19:15:23.657606Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"6b42f328fa3a6e70","initial-advertise-peer-urls":["https://192.168.50.124:2380"],"listen-peer-urls":["https://192.168.50.124:2380"],"advertise-client-urls":["https://192.168.50.124:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.50.124:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T19:15:23.657811Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T19:15:23.640041Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.50.124:2380"}
	{"level":"info","ts":"2024-03-14T19:15:23.657926Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.124:2380"}
	{"level":"info","ts":"2024-03-14T19:15:24.611311Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 is starting a new election at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:24.611532Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became pre-candidate at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:24.611659Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 received MsgPreVoteResp from 6b42f328fa3a6e70 at term 3"}
	{"level":"info","ts":"2024-03-14T19:15:24.611727Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became candidate at term 4"}
	{"level":"info","ts":"2024-03-14T19:15:24.611772Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 received MsgVoteResp from 6b42f328fa3a6e70 at term 4"}
	{"level":"info","ts":"2024-03-14T19:15:24.611811Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"6b42f328fa3a6e70 became leader at term 4"}
	{"level":"info","ts":"2024-03-14T19:15:24.611849Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 6b42f328fa3a6e70 elected leader 6b42f328fa3a6e70 at term 4"}
	{"level":"info","ts":"2024-03-14T19:15:24.733118Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"6b42f328fa3a6e70","local-member-attributes":"{Name:kubernetes-upgrade-097195 ClientURLs:[https://192.168.50.124:2379]}","request-path":"/0/members/6b42f328fa3a6e70/attributes","cluster-id":"7f90afa2e0726b","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:15:24.733478Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:15:24.735264Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:15:24.737301Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:15:24.737447Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:15:24.740554Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.124:2379"}
	{"level":"info","ts":"2024-03-14T19:15:24.74704Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:15:31 up 1 min,  0 users,  load average: 2.01, 0.75, 0.27
	Linux kubernetes-upgrade-097195 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [266118d1e2ae0a6b0fe79eb4e48da42478835fa3e494ef2894a0da8d2779198e] <==
	I0314 19:15:26.161368       1 establishing_controller.go:76] Starting EstablishingController
	I0314 19:15:26.161401       1 nonstructuralschema_controller.go:192] Starting NonStructuralSchemaConditionController
	I0314 19:15:26.161441       1 apiapproval_controller.go:186] Starting KubernetesAPIApprovalPolicyConformantConditionController
	I0314 19:15:26.161477       1 crd_finalizer.go:266] Starting CRDFinalizer
	I0314 19:15:26.161773       1 aggregator.go:165] initial CRD sync complete...
	I0314 19:15:26.161811       1 autoregister_controller.go:141] Starting autoregister controller
	I0314 19:15:26.161833       1 cache.go:32] Waiting for caches to sync for autoregister controller
	I0314 19:15:26.184777       1 apf_controller.go:379] Running API Priority and Fairness config worker
	I0314 19:15:26.184816       1 apf_controller.go:382] Running API Priority and Fairness periodic rebalancing process
	I0314 19:15:26.186723       1 shared_informer.go:318] Caches are synced for cluster_authentication_trust_controller
	I0314 19:15:26.188096       1 cache.go:39] Caches are synced for AvailableConditionController controller
	I0314 19:15:26.188872       1 shared_informer.go:318] Caches are synced for configmaps
	I0314 19:15:26.188934       1 cache.go:39] Caches are synced for APIServiceRegistrationController controller
	I0314 19:15:26.239991       1 controller.go:624] quota admission added evaluator for: leases.coordination.k8s.io
	I0314 19:15:26.262891       1 cache.go:39] Caches are synced for autoregister controller
	I0314 19:15:26.276378       1 shared_informer.go:318] Caches are synced for node_authorizer
	I0314 19:15:27.105594       1 controller.go:624] quota admission added evaluator for: endpoints
	I0314 19:15:27.107687       1 storage_scheduling.go:111] all system priority classes are created successfully or already exist.
	W0314 19:15:27.520000       1 lease.go:265] Resetting endpoints for master service "kubernetes" to [192.168.50.124]
	I0314 19:15:27.528385       1 controller.go:624] quota admission added evaluator for: endpointslices.discovery.k8s.io
	I0314 19:15:28.035295       1 controller.go:624] quota admission added evaluator for: serviceaccounts
	I0314 19:15:28.052972       1 controller.go:624] quota admission added evaluator for: deployments.apps
	I0314 19:15:28.100599       1 controller.go:624] quota admission added evaluator for: daemonsets.apps
	I0314 19:15:28.127051       1 controller.go:624] quota admission added evaluator for: roles.rbac.authorization.k8s.io
	I0314 19:15:28.137488       1 controller.go:624] quota admission added evaluator for: rolebindings.rbac.authorization.k8s.io
	
	
	==> kube-apiserver [6386fa1010245138419b18d1ff647e7c6bf8755818b4091a9b4f81be9db992a9] <==
	I0314 19:15:11.084069       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:15:11.084104       1 dynamic_cafile_content.go:171] "Shutting down controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:15:11.085363       1 dynamic_serving_content.go:146] "Shutting down controller" name="serving-cert::/var/lib/minikube/certs/apiserver.crt::/var/lib/minikube/certs/apiserver.key"
	I0314 19:15:11.121760       1 controller.go:159] Shutting down quota evaluator
	I0314 19:15:11.121909       1 controller.go:178] quota evaluator worker shutdown
	I0314 19:15:11.122339       1 controller.go:178] quota evaluator worker shutdown
	I0314 19:15:11.122494       1 controller.go:178] quota evaluator worker shutdown
	I0314 19:15:11.127217       1 controller.go:178] quota evaluator worker shutdown
	I0314 19:15:11.127267       1 controller.go:178] quota evaluator worker shutdown
	W0314 19:15:11.959008       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:11.959572       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:12.958888       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:12.960209       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0314 19:15:13.959356       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:13.959590       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	W0314 19:15:14.958557       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:14.960257       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:15.958814       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:15.959727       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:16.959309       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:16.960214       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:17.958700       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	E0314 19:15:17.960414       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	E0314 19:15:18.959460       1 storage_rbac.go:187] unable to initialize clusterroles: Get "https://localhost:8443/apis/rbac.authorization.k8s.io/v1/clusterroles": dial tcp 127.0.0.1:8443: connect: connection refused
	W0314 19:15:18.959456       1 storage_scheduling.go:106] unable to get PriorityClass system-node-critical: Get "https://localhost:8443/apis/scheduling.k8s.io/v1/priorityclasses/system-node-critical": dial tcp 127.0.0.1:8443: connect: connection refused. Retrying...
	
	
	==> kube-controller-manager [79fb9ea7203ecf4708bff00a385d4462f4bcec15e6cbe794d49d01bbfdf180a6] <==
	I0314 19:15:28.170018       1 shared_informer.go:311] Waiting for caches to sync for certificate-csrsigning-legacy-unknown
	I0314 19:15:28.170110       1 dynamic_serving_content.go:132] "Starting controller" name="csr-controller::/var/lib/minikube/certs/ca.crt::/var/lib/minikube/certs/ca.key"
	I0314 19:15:28.174891       1 controllermanager.go:735] "Started controller" controller="certificatesigningrequest-cleaner-controller"
	I0314 19:15:28.175249       1 cleaner.go:83] "Starting CSR cleaner controller"
	I0314 19:15:28.178562       1 controllermanager.go:735] "Started controller" controller="clusterrole-aggregation-controller"
	I0314 19:15:28.178851       1 clusterroleaggregation_controller.go:189] "Starting ClusterRoleAggregator controller"
	I0314 19:15:28.179987       1 shared_informer.go:311] Waiting for caches to sync for ClusterRoleAggregator
	I0314 19:15:28.182925       1 controllermanager.go:735] "Started controller" controller="persistentvolume-binder-controller"
	I0314 19:15:28.183399       1 controllermanager.go:687] "Controller is disabled by a feature gate" controller="service-cidr-controller" requiredFeatureGates=["MultiCIDRServiceAllocator"]
	I0314 19:15:28.183642       1 pv_controller_base.go:319] "Starting persistent volume controller"
	I0314 19:15:28.183678       1 shared_informer.go:311] Waiting for caches to sync for persistent volume
	I0314 19:15:28.187008       1 controllermanager.go:735] "Started controller" controller="endpointslice-mirroring-controller"
	I0314 19:15:28.187205       1 endpointslicemirroring_controller.go:223] "Starting EndpointSliceMirroring controller"
	I0314 19:15:28.187422       1 shared_informer.go:311] Waiting for caches to sync for endpoint_slice_mirroring
	I0314 19:15:28.201719       1 disruption.go:433] "Sending events to api server."
	I0314 19:15:28.201794       1 disruption.go:444] "Starting disruption controller"
	I0314 19:15:28.201803       1 shared_informer.go:311] Waiting for caches to sync for disruption
	I0314 19:15:28.201869       1 controllermanager.go:735] "Started controller" controller="disruption-controller"
	I0314 19:15:28.216628       1 controllermanager.go:735] "Started controller" controller="bootstrap-signer-controller"
	I0314 19:15:28.216786       1 shared_informer.go:311] Waiting for caches to sync for bootstrap_signer
	I0314 19:15:28.232077       1 controllermanager.go:735] "Started controller" controller="token-cleaner-controller"
	I0314 19:15:28.232931       1 tokencleaner.go:112] "Starting token cleaner controller"
	I0314 19:15:28.233093       1 shared_informer.go:311] Waiting for caches to sync for token_cleaner
	I0314 19:15:28.235641       1 shared_informer.go:318] Caches are synced for token_cleaner
	I0314 19:15:28.248232       1 shared_informer.go:318] Caches are synced for tokens
	
	
	==> kube-controller-manager [a8a5ba446468a0fb69ed3e176187f480b61710c02418bb4f9c3f05f69fd8f7d3] <==
	I0314 19:15:09.131406       1 serving.go:380] Generated self-signed cert in-memory
	I0314 19:15:09.406793       1 controllermanager.go:187] "Starting" version="v1.29.0-rc.2"
	I0314 19:15:09.406865       1 controllermanager.go:189] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:15:09.408634       1 dynamic_cafile_content.go:157] "Starting controller" name="request-header::/var/lib/minikube/certs/front-proxy-ca.crt"
	I0314 19:15:09.412287       1 dynamic_cafile_content.go:157] "Starting controller" name="client-ca-bundle::/var/lib/minikube/certs/ca.crt"
	I0314 19:15:09.413108       1 secure_serving.go:213] Serving securely on 127.0.0.1:10257
	I0314 19:15:09.415246       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	
	
	==> kube-proxy [5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1] <==
	
	
	==> kube-proxy [ef3acff996cea6cfc7826c2cfca1b826e0236ef8dd8a1f3fe3dc6e6d67546c51] <==
	I0314 19:15:27.114372       1 server_others.go:72] "Using iptables proxy"
	I0314 19:15:27.162931       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.50.124"]
	I0314 19:15:27.251258       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0314 19:15:27.251312       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:15:27.251326       1 server_others.go:168] "Using iptables Proxier"
	I0314 19:15:27.257568       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:15:27.257919       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0314 19:15:27.258550       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:15:27.259723       1 config.go:188] "Starting service config controller"
	I0314 19:15:27.259910       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:15:27.260000       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:15:27.260082       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:15:27.260808       1 config.go:315] "Starting node config controller"
	I0314 19:15:27.260846       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:15:27.360368       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:15:27.360482       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:15:27.361095       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [b6162095f46ede76f29a487554d3f744693565c7918d1e584b31cf4c73a01411] <==
	I0314 19:15:24.214522       1 serving.go:380] Generated self-signed cert in-memory
	W0314 19:15:26.166477       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 19:15:26.166652       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:15:26.166800       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 19:15:26.166949       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:15:26.193781       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 19:15:26.193845       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:15:26.205385       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:15:26.205451       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:15:26.208904       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:15:26.211459       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:15:26.306563       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kube-scheduler [bd9ef249bea1f37cc70a4d8288f34e19500abb9e38a218d92e2608791fb7260d] <==
	I0314 19:15:09.435060       1 serving.go:380] Generated self-signed cert in-memory
	W0314 19:15:11.000372       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 19:15:11.000491       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:15:11.000604       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 19:15:11.000644       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:15:11.034359       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 19:15:11.034773       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:15:11.040090       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:15:11.040189       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:15:11.047453       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:15:11.047574       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:15:11.140542       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:15:20.129255       1 secure_serving.go:258] Stopped listening on 127.0.0.1:10259
	I0314 19:15:20.129384       1 tlsconfig.go:255] "Shutting down DynamicServingCertificateController"
	I0314 19:15:20.129675       1 configmap_cafile_content.go:223] "Shutting down controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	E0314 19:15:20.129850       1 run.go:74] "command failed" err="finished without leader elect"
	
	
	==> kubelet <==
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: W0314 19:15:23.043917    3876 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-097195&limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: E0314 19:15:23.043985    3876 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Node: failed to list *v1.Node: Get "https://control-plane.minikube.internal:8443/api/v1/nodes?fieldSelector=metadata.name%3Dkubernetes-upgrade-097195&limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: W0314 19:15:23.119634    3876 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: E0314 19:15:23.119784    3876 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get "https://control-plane.minikube.internal:8443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: W0314 19:15:23.351980    3876 reflector.go:539] vendor/k8s.io/client-go/informers/factory.go:159: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: E0314 19:15:23.352039    3876 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:159: Failed to watch *v1.Service: failed to list *v1.Service: Get "https://control-plane.minikube.internal:8443/api/v1/services?limit=500&resourceVersion=0": dial tcp 192.168.50.124:8443: connect: connection refused
	Mar 14 19:15:23 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:23.775993    3876 kubelet_node_status.go:73] "Attempting to register node" node="kubernetes-upgrade-097195"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.204380    3876 kubelet_node_status.go:112] "Node was previously registered" node="kubernetes-upgrade-097195"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.204495    3876 kubelet_node_status.go:76] "Successfully registered node" node="kubernetes-upgrade-097195"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.206442    3876 kuberuntime_manager.go:1529] "Updating runtime config through cri with podcidr" CIDR="10.244.0.0/24"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.208125    3876 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="10.244.0.0/24"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.241380    3876 apiserver.go:52] "Watching apiserver"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.245543    3876 topology_manager.go:215] "Topology Admit Handler" podUID="4eb17fcf-73a7-4f6d-b451-7285dcc53df3" podNamespace="kube-system" podName="storage-provisioner"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.245662    3876 topology_manager.go:215] "Topology Admit Handler" podUID="7f0d6300-566d-4eac-9fe2-05cebc5935c8" podNamespace="kube-system" podName="kube-proxy-8l5s8"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.245713    3876 topology_manager.go:215] "Topology Admit Handler" podUID="316a0847-3873-4cb4-aed8-4c2f1c4a15bf" podNamespace="kube-system" podName="coredns-76f75df574-4x6mw"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.245752    3876 topology_manager.go:215] "Topology Admit Handler" podUID="ad6c66e2-dc9e-4c04-90f3-e183d2297cfe" podNamespace="kube-system" podName="coredns-76f75df574-mk7kd"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.259417    3876 desired_state_of_world_populator.go:159] "Finished populating initial desired state of world"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.321843    3876 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/7f0d6300-566d-4eac-9fe2-05cebc5935c8-xtables-lock\") pod \"kube-proxy-8l5s8\" (UID: \"7f0d6300-566d-4eac-9fe2-05cebc5935c8\") " pod="kube-system/kube-proxy-8l5s8"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.321965    3876 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/host-path/4eb17fcf-73a7-4f6d-b451-7285dcc53df3-tmp\") pod \"storage-provisioner\" (UID: \"4eb17fcf-73a7-4f6d-b451-7285dcc53df3\") " pod="kube-system/storage-provisioner"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.322017    3876 reconciler_common.go:258] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/7f0d6300-566d-4eac-9fe2-05cebc5935c8-lib-modules\") pod \"kube-proxy-8l5s8\" (UID: \"7f0d6300-566d-4eac-9fe2-05cebc5935c8\") " pod="kube-system/kube-proxy-8l5s8"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.547276    3876 scope.go:117] "RemoveContainer" containerID="e75d4116262ebae897e34f95a9b9059d9d1bf5f1403c7180a1a5c238cfdd94dd"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.547610    3876 scope.go:117] "RemoveContainer" containerID="cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.547925    3876 scope.go:117] "RemoveContainer" containerID="5407193dadf65cf45c0db9b89d0dfa47ce7096e01bfe800142c1ed051cdc1cc1"
	Mar 14 19:15:26 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:26.550342    3876 scope.go:117] "RemoveContainer" containerID="24f500e1a28ac9a41ad3157eaa80188c8323250e7c022d4390a4418160d97488"
	Mar 14 19:15:28 kubernetes-upgrade-097195 kubelet[3876]: I0314 19:15:28.512738    3876 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
	
	
	==> storage-provisioner [7aa81d966614f154ddfab2b73bf6eb5011958d54b230299a3706a249a2de01d0] <==
	I0314 19:15:27.047059       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:15:27.086734       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:15:27.087232       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:15:27.122009       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:15:27.122263       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-097195_0fcaa31b-81b2-41b0-96a1-e268d453f578!
	I0314 19:15:27.135302       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"4ceaa720-d50d-4b86-beef-44d4a7732ee9", APIVersion:"v1", ResourceVersion:"406", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' kubernetes-upgrade-097195_0fcaa31b-81b2-41b0-96a1-e268d453f578 became leader
	I0314 19:15:27.223196       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_kubernetes-upgrade-097195_0fcaa31b-81b2-41b0-96a1-e268d453f578!
	
	
	==> storage-provisioner [cb2637f9b9729376dfe4453004663fcd092011983641e34eff8124f4ba17fafd] <==
	I0314 19:15:08.176683       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:15:29.849904  989433 logs.go:258] failed to output last start logs: failed to read file /home/jenkins/minikube-integration/18384-942544/.minikube/logs/lastStart.txt: bufio.Scanner: token too long

                                                
                                                
** /stderr **
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p kubernetes-upgrade-097195 -n kubernetes-upgrade-097195
helpers_test.go:261: (dbg) Run:  kubectl --context kubernetes-upgrade-097195 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:285: <<< TestKubernetesUpgrade FAILED: end of post-mortem logs <<<
helpers_test.go:286: ---------------------/post-mortem---------------------------------
helpers_test.go:175: Cleaning up "kubernetes-upgrade-097195" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubernetes-upgrade-097195
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p kubernetes-upgrade-097195: (1.230459812s)
--- FAIL: TestKubernetesUpgrade (469.68s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/FirstStart (294.76s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:186: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (4m54.460714591s)

                                                
                                                
-- stdout --
	* [old-k8s-version-968094] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	* Starting "old-k8s-version-968094" primary control-plane node in "old-k8s-version-968094" cluster
	* Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:13:54.203012  988436 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:13:54.203275  988436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:13:54.203286  988436 out.go:304] Setting ErrFile to fd 2...
	I0314 19:13:54.203290  988436 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:13:54.203485  988436 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:13:54.204081  988436 out.go:298] Setting JSON to false
	I0314 19:13:54.205130  988436 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":96986,"bootTime":1710346648,"procs":215,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:13:54.205196  988436 start.go:139] virtualization: kvm guest
	I0314 19:13:54.207533  988436 out.go:177] * [old-k8s-version-968094] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:13:54.208949  988436 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:13:54.210269  988436 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:13:54.208984  988436 notify.go:220] Checking for updates...
	I0314 19:13:54.212612  988436 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:13:54.213838  988436 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:13:54.215125  988436 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:13:54.216269  988436 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:13:54.217858  988436 config.go:182] Loaded profile config "NoKubernetes-578974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v0.0.0
	I0314 19:13:54.217977  988436 config.go:182] Loaded profile config "cert-expiration-525214": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:13:54.218083  988436 config.go:182] Loaded profile config "kubernetes-upgrade-097195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:13:54.218183  988436 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:13:55.200119  988436 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:13:55.201713  988436 start.go:297] selected driver: kvm2
	I0314 19:13:55.201733  988436 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:13:55.201749  988436 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:13:55.202770  988436 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:13:55.202862  988436 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:13:55.219661  988436 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:13:55.219716  988436 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:13:55.219981  988436 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:13:55.220026  988436 cni.go:84] Creating CNI manager for ""
	I0314 19:13:55.220037  988436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:13:55.220050  988436 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:13:55.220108  988436 start.go:340] cluster config:
	{Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:13:55.220237  988436 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:13:55.221929  988436 out.go:177] * Starting "old-k8s-version-968094" primary control-plane node in "old-k8s-version-968094" cluster
	I0314 19:13:55.223255  988436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:13:55.223299  988436 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 19:13:55.223313  988436 cache.go:56] Caching tarball of preloaded images
	I0314 19:13:55.223411  988436 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:13:55.223426  988436 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 19:13:55.223547  988436 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:13:55.223570  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json: {Name:mk775b9e5ec231840f7f8b92c4b92c61a3c60317 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:13:55.223721  988436 start.go:360] acquireMachinesLock for old-k8s-version-968094: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:14:14.977397  988436 start.go:364] duration metric: took 19.753633418s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:14:14.977471  988436 start.go:93] Provisioning new machine with config: &{Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:14:14.977616  988436 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 19:14:14.979586  988436 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:14:14.979774  988436 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:14:14.979842  988436 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:14:14.996969  988436 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37201
	I0314 19:14:14.997375  988436 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:14:14.997882  988436 main.go:141] libmachine: Using API Version  1
	I0314 19:14:14.997906  988436 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:14:14.998341  988436 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:14:14.998669  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:14.998872  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:14.999049  988436 start.go:159] libmachine.API.Create for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:14:14.999078  988436 client.go:168] LocalClient.Create starting
	I0314 19:14:14.999125  988436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:14:14.999170  988436 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:14.999194  988436 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:14.999299  988436 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:14:14.999324  988436 main.go:141] libmachine: Decoding PEM data...
	I0314 19:14:14.999335  988436 main.go:141] libmachine: Parsing certificate...
	I0314 19:14:14.999353  988436 main.go:141] libmachine: Running pre-create checks...
	I0314 19:14:14.999578  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .PreCreateCheck
	I0314 19:14:15.001322  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:15.001814  988436 main.go:141] libmachine: Creating machine...
	I0314 19:14:15.001831  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .Create
	I0314 19:14:15.002002  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating KVM machine...
	I0314 19:14:15.003167  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found existing default KVM network
	I0314 19:14:15.004844  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.004693  988720 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:d2:f4} reservation:<nil>}
	I0314 19:14:15.005768  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.005684  988720 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:2b:d3:3a} reservation:<nil>}
	I0314 19:14:15.006646  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.006567  988720 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:24:75:87} reservation:<nil>}
	I0314 19:14:15.007824  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.007730  988720 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc000352480}
	I0314 19:14:15.007850  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | created network xml: 
	I0314 19:14:15.007864  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | <network>
	I0314 19:14:15.007873  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <name>mk-old-k8s-version-968094</name>
	I0314 19:14:15.007887  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <dns enable='no'/>
	I0314 19:14:15.007897  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   
	I0314 19:14:15.007913  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 19:14:15.007926  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |     <dhcp>
	I0314 19:14:15.007936  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 19:14:15.007946  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |     </dhcp>
	I0314 19:14:15.007954  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   </ip>
	I0314 19:14:15.007965  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG |   
	I0314 19:14:15.007974  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | </network>
	I0314 19:14:15.007980  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | 
	I0314 19:14:15.013775  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | trying to create private KVM network mk-old-k8s-version-968094 192.168.72.0/24...
	I0314 19:14:15.087225  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | private KVM network mk-old-k8s-version-968094 192.168.72.0/24 created
	I0314 19:14:15.087259  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.087173  988720 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:15.087278  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 ...
	I0314 19:14:15.087297  988436 main.go:141] libmachine: (old-k8s-version-968094) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:14:15.087352  988436 main.go:141] libmachine: (old-k8s-version-968094) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:14:15.334181  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.334042  988720 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa...
	I0314 19:14:15.502173  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.502014  988720 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/old-k8s-version-968094.rawdisk...
	I0314 19:14:15.502210  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Writing magic tar header
	I0314 19:14:15.502233  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Writing SSH key tar header
	I0314 19:14:15.502248  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:15.502195  988720 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 ...
	I0314 19:14:15.502340  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094
	I0314 19:14:15.502378  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094 (perms=drwx------)
	I0314 19:14:15.502394  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:14:15.502415  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:14:15.502428  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:14:15.502444  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:14:15.502456  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:14:15.502473  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:14:15.502487  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:14:15.502501  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:14:15.502516  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:14:15.502527  988436 main.go:141] libmachine: (old-k8s-version-968094) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:14:15.502542  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:14:15.502553  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Checking permissions on dir: /home
	I0314 19:14:15.502573  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Skipping /home - not owner
	I0314 19:14:15.503897  988436 main.go:141] libmachine: (old-k8s-version-968094) define libvirt domain using xml: 
	I0314 19:14:15.503919  988436 main.go:141] libmachine: (old-k8s-version-968094) <domain type='kvm'>
	I0314 19:14:15.503926  988436 main.go:141] libmachine: (old-k8s-version-968094)   <name>old-k8s-version-968094</name>
	I0314 19:14:15.503932  988436 main.go:141] libmachine: (old-k8s-version-968094)   <memory unit='MiB'>2200</memory>
	I0314 19:14:15.503938  988436 main.go:141] libmachine: (old-k8s-version-968094)   <vcpu>2</vcpu>
	I0314 19:14:15.503942  988436 main.go:141] libmachine: (old-k8s-version-968094)   <features>
	I0314 19:14:15.503950  988436 main.go:141] libmachine: (old-k8s-version-968094)     <acpi/>
	I0314 19:14:15.503955  988436 main.go:141] libmachine: (old-k8s-version-968094)     <apic/>
	I0314 19:14:15.503960  988436 main.go:141] libmachine: (old-k8s-version-968094)     <pae/>
	I0314 19:14:15.503965  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.503970  988436 main.go:141] libmachine: (old-k8s-version-968094)   </features>
	I0314 19:14:15.503977  988436 main.go:141] libmachine: (old-k8s-version-968094)   <cpu mode='host-passthrough'>
	I0314 19:14:15.503983  988436 main.go:141] libmachine: (old-k8s-version-968094)   
	I0314 19:14:15.503988  988436 main.go:141] libmachine: (old-k8s-version-968094)   </cpu>
	I0314 19:14:15.503994  988436 main.go:141] libmachine: (old-k8s-version-968094)   <os>
	I0314 19:14:15.504002  988436 main.go:141] libmachine: (old-k8s-version-968094)     <type>hvm</type>
	I0314 19:14:15.504008  988436 main.go:141] libmachine: (old-k8s-version-968094)     <boot dev='cdrom'/>
	I0314 19:14:15.504019  988436 main.go:141] libmachine: (old-k8s-version-968094)     <boot dev='hd'/>
	I0314 19:14:15.504027  988436 main.go:141] libmachine: (old-k8s-version-968094)     <bootmenu enable='no'/>
	I0314 19:14:15.504032  988436 main.go:141] libmachine: (old-k8s-version-968094)   </os>
	I0314 19:14:15.504038  988436 main.go:141] libmachine: (old-k8s-version-968094)   <devices>
	I0314 19:14:15.504043  988436 main.go:141] libmachine: (old-k8s-version-968094)     <disk type='file' device='cdrom'>
	I0314 19:14:15.504053  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/boot2docker.iso'/>
	I0314 19:14:15.504064  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target dev='hdc' bus='scsi'/>
	I0314 19:14:15.504076  988436 main.go:141] libmachine: (old-k8s-version-968094)       <readonly/>
	I0314 19:14:15.504087  988436 main.go:141] libmachine: (old-k8s-version-968094)     </disk>
	I0314 19:14:15.504150  988436 main.go:141] libmachine: (old-k8s-version-968094)     <disk type='file' device='disk'>
	I0314 19:14:15.504178  988436 main.go:141] libmachine: (old-k8s-version-968094)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:14:15.504193  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/old-k8s-version-968094.rawdisk'/>
	I0314 19:14:15.504202  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target dev='hda' bus='virtio'/>
	I0314 19:14:15.504234  988436 main.go:141] libmachine: (old-k8s-version-968094)     </disk>
	I0314 19:14:15.504250  988436 main.go:141] libmachine: (old-k8s-version-968094)     <interface type='network'>
	I0314 19:14:15.504265  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source network='mk-old-k8s-version-968094'/>
	I0314 19:14:15.504275  988436 main.go:141] libmachine: (old-k8s-version-968094)       <model type='virtio'/>
	I0314 19:14:15.504281  988436 main.go:141] libmachine: (old-k8s-version-968094)     </interface>
	I0314 19:14:15.504289  988436 main.go:141] libmachine: (old-k8s-version-968094)     <interface type='network'>
	I0314 19:14:15.504296  988436 main.go:141] libmachine: (old-k8s-version-968094)       <source network='default'/>
	I0314 19:14:15.504304  988436 main.go:141] libmachine: (old-k8s-version-968094)       <model type='virtio'/>
	I0314 19:14:15.504314  988436 main.go:141] libmachine: (old-k8s-version-968094)     </interface>
	I0314 19:14:15.504326  988436 main.go:141] libmachine: (old-k8s-version-968094)     <serial type='pty'>
	I0314 19:14:15.504337  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target port='0'/>
	I0314 19:14:15.504350  988436 main.go:141] libmachine: (old-k8s-version-968094)     </serial>
	I0314 19:14:15.504360  988436 main.go:141] libmachine: (old-k8s-version-968094)     <console type='pty'>
	I0314 19:14:15.504366  988436 main.go:141] libmachine: (old-k8s-version-968094)       <target type='serial' port='0'/>
	I0314 19:14:15.504373  988436 main.go:141] libmachine: (old-k8s-version-968094)     </console>
	I0314 19:14:15.504379  988436 main.go:141] libmachine: (old-k8s-version-968094)     <rng model='virtio'>
	I0314 19:14:15.504387  988436 main.go:141] libmachine: (old-k8s-version-968094)       <backend model='random'>/dev/random</backend>
	I0314 19:14:15.504394  988436 main.go:141] libmachine: (old-k8s-version-968094)     </rng>
	I0314 19:14:15.504405  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.504419  988436 main.go:141] libmachine: (old-k8s-version-968094)     
	I0314 19:14:15.504434  988436 main.go:141] libmachine: (old-k8s-version-968094)   </devices>
	I0314 19:14:15.504472  988436 main.go:141] libmachine: (old-k8s-version-968094) </domain>
	I0314 19:14:15.504497  988436 main.go:141] libmachine: (old-k8s-version-968094) 
	I0314 19:14:15.509914  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:11:fb:7b in network default
	I0314 19:14:15.510536  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:14:15.510563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:15.511299  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:14:15.511673  988436 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:14:15.512297  988436 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:14:15.513015  988436 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:14:16.838508  988436 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:14:16.839093  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:16.839434  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:16.839465  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:16.839422  988720 retry.go:31] will retry after 280.866203ms: waiting for machine to come up
	I0314 19:14:17.122181  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:17.123635  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:17.123664  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:17.123570  988720 retry.go:31] will retry after 276.458753ms: waiting for machine to come up
	I0314 19:14:17.785603  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:17.786111  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:17.786168  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:17.786093  988720 retry.go:31] will retry after 389.166315ms: waiting for machine to come up
	I0314 19:14:18.176561  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:18.177070  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:18.177106  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:18.177023  988720 retry.go:31] will retry after 380.752529ms: waiting for machine to come up
	I0314 19:14:18.559815  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:18.560949  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:18.560983  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:18.560890  988720 retry.go:31] will retry after 727.786586ms: waiting for machine to come up
	I0314 19:14:19.290657  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:19.291148  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:19.291182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:19.291092  988720 retry.go:31] will retry after 821.899642ms: waiting for machine to come up
	I0314 19:14:20.114550  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:20.115072  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:20.115105  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:20.115013  988720 retry.go:31] will retry after 966.170572ms: waiting for machine to come up
	I0314 19:14:21.083182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:21.083774  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:21.083801  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:21.083727  988720 retry.go:31] will retry after 1.076047079s: waiting for machine to come up
	I0314 19:14:22.161652  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:22.162151  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:22.162182  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:22.162089  988720 retry.go:31] will retry after 1.501351238s: waiting for machine to come up
	I0314 19:14:23.665996  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:23.666527  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:23.666563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:23.666483  988720 retry.go:31] will retry after 1.978163759s: waiting for machine to come up
	I0314 19:14:25.646944  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:25.647516  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:25.647545  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:25.647467  988720 retry.go:31] will retry after 2.901646032s: waiting for machine to come up
	I0314 19:14:28.552868  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:28.553474  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:28.553508  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:28.553406  988720 retry.go:31] will retry after 3.528157438s: waiting for machine to come up
	I0314 19:14:32.084474  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:32.084898  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:32.084931  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:32.084846  988720 retry.go:31] will retry after 4.015658963s: waiting for machine to come up
	I0314 19:14:36.102233  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:36.102603  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:14:36.102632  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:14:36.102545  988720 retry.go:31] will retry after 5.130164013s: waiting for machine to come up
	I0314 19:14:41.233833  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.234521  988436 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:14:41.234548  988436 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:14:41.234559  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.234918  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094
	I0314 19:14:41.311619  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:14:41.311651  988436 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:14:41.311664  988436 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:14:41.314272  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.314865  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:minikube Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.314905  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.315027  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:14:41.315064  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:14:41.315099  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:14:41.315121  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:14:41.315135  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:14:41.440131  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:14:41.440410  988436 main.go:141] libmachine: (old-k8s-version-968094) KVM machine creation complete!
	I0314 19:14:41.440739  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:41.441338  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:41.441540  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:41.441683  988436 main.go:141] libmachine: Waiting for machine to be running, this may take a few minutes...
	I0314 19:14:41.441703  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:14:41.443149  988436 main.go:141] libmachine: Detecting operating system of created instance...
	I0314 19:14:41.443165  988436 main.go:141] libmachine: Waiting for SSH to be available...
	I0314 19:14:41.443173  988436 main.go:141] libmachine: Getting to WaitForSSH function...
	I0314 19:14:41.443179  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.445819  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.446175  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.446211  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.446343  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.446536  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.446702  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.446836  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.447031  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.447292  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.447309  988436 main.go:141] libmachine: About to run SSH command:
	exit 0
	I0314 19:14:41.555625  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:14:41.555651  988436 main.go:141] libmachine: Detecting the provisioner...
	I0314 19:14:41.555661  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.558680  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.559201  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.559234  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.559380  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.559603  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.559772  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.559895  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.560034  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.560224  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.560241  988436 main.go:141] libmachine: About to run SSH command:
	cat /etc/os-release
	I0314 19:14:41.669043  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: NAME=Buildroot
	VERSION=2023.02.9-dirty
	ID=buildroot
	VERSION_ID=2023.02.9
	PRETTY_NAME="Buildroot 2023.02.9"
	
	I0314 19:14:41.669119  988436 main.go:141] libmachine: found compatible host: buildroot
	I0314 19:14:41.669132  988436 main.go:141] libmachine: Provisioning with buildroot...
	I0314 19:14:41.669141  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.669386  988436 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:14:41.669423  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.669617  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.672459  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.672874  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.672902  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.673013  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.673213  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.673399  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.673528  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.673671  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.673846  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.673859  988436 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:14:41.802301  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:14:41.802339  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.806468  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.806904  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.806938  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.807147  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:41.807366  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.807517  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:41.807661  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:41.807812  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:41.808045  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:41.808067  988436 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:14:41.926611  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:14:41.926646  988436 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:14:41.926706  988436 buildroot.go:174] setting up certificates
	I0314 19:14:41.926720  988436 provision.go:84] configureAuth start
	I0314 19:14:41.926737  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:14:41.927071  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:41.929859  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.930239  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.930276  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.930391  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:41.932830  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.933165  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:41.933194  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:41.933295  988436 provision.go:143] copyHostCerts
	I0314 19:14:41.933367  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:14:41.933378  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:14:41.933440  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:14:41.933566  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:14:41.933579  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:14:41.933612  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:14:41.933689  988436 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:14:41.933699  988436 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:14:41.933726  988436 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:14:41.933797  988436 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:14:42.182763  988436 provision.go:177] copyRemoteCerts
	I0314 19:14:42.182826  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:14:42.182857  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.185712  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.186035  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.186066  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.186285  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.186534  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.186705  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.186852  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.270889  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:14:42.298651  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:14:42.325621  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:14:42.352639  988436 provision.go:87] duration metric: took 425.90336ms to configureAuth
	I0314 19:14:42.352669  988436 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:14:42.352875  988436 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:14:42.352965  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.355723  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.356124  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.356146  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.356341  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.356546  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.356722  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.356840  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.357004  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:42.357172  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:42.357189  988436 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:14:42.642868  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
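
Note: the step above provisions the extra runtime flags that minikube hands to CRI-O. A minimal shell equivalent of that SSH command, using the exact values shown in the log (run as root on the guest; the insecure-registry range is the cluster's service CIDR):

    # Write the minikube-specific CRI-O options and restart the runtime.
    sudo mkdir -p /etc/sysconfig
    printf "%s\n" "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" \
      | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio
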
	I0314 19:14:42.642901  988436 main.go:141] libmachine: Checking connection to Docker...
	I0314 19:14:42.642912  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetURL
	I0314 19:14:42.644372  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using libvirt version 6000000
	I0314 19:14:42.647311  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.647702  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.647731  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.647954  988436 main.go:141] libmachine: Docker is up and running!
	I0314 19:14:42.647972  988436 main.go:141] libmachine: Reticulating splines...
	I0314 19:14:42.647981  988436 client.go:171] duration metric: took 27.648891803s to LocalClient.Create
	I0314 19:14:42.648014  988436 start.go:167] duration metric: took 27.648964706s to libmachine.API.Create "old-k8s-version-968094"
	I0314 19:14:42.648041  988436 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:14:42.648055  988436 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:14:42.648075  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.648374  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:14:42.648405  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.650521  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.650849  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.650879  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.650950  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.651149  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.651350  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.651499  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.735116  988436 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:14:42.740327  988436 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:14:42.740353  988436 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:14:42.740417  988436 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:14:42.740498  988436 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:14:42.740607  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:14:42.750455  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:14:42.777266  988436 start.go:296] duration metric: took 129.209637ms for postStartSetup
	I0314 19:14:42.777327  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:14:42.778098  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:42.781186  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.781563  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.781594  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.781820  988436 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:14:42.782042  988436 start.go:128] duration metric: took 27.804406667s to createHost
	I0314 19:14:42.782072  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.784251  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.784637  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.784668  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.784784  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.785002  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.785154  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.785274  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.785446  988436 main.go:141] libmachine: Using SSH client type: native
	I0314 19:14:42.785606  988436 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:14:42.785617  988436 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:14:42.897283  988436 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710443682.880012035
	
	I0314 19:14:42.897308  988436 fix.go:216] guest clock: 1710443682.880012035
	I0314 19:14:42.897317  988436 fix.go:229] Guest: 2024-03-14 19:14:42.880012035 +0000 UTC Remote: 2024-03-14 19:14:42.78205842 +0000 UTC m=+48.630554959 (delta=97.953615ms)
	I0314 19:14:42.897367  988436 fix.go:200] guest clock delta is within tolerance: 97.953615ms
	I0314 19:14:42.897374  988436 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 27.91994048s
	I0314 19:14:42.897416  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.897723  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:42.900723  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.901123  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.901151  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.901341  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.901889  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.902093  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:14:42.902180  988436 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:14:42.902248  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.902358  988436 ssh_runner.go:195] Run: cat /version.json
	I0314 19:14:42.902387  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:14:42.905112  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905330  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905535  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.905578  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905671  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.905808  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:42.905848  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:42.905853  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.906019  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.906067  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:14:42.906174  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.906247  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:14:42.906407  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:14:42.906529  988436 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:14:42.990403  988436 ssh_runner.go:195] Run: systemctl --version
	I0314 19:14:43.019894  988436 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:14:43.181275  988436 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:14:43.188411  988436 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:14:43.188469  988436 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:14:43.207426  988436 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:14:43.207460  988436 start.go:494] detecting cgroup driver to use...
	I0314 19:14:43.207540  988436 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:14:43.227408  988436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:14:43.244322  988436 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:14:43.244390  988436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:14:43.259914  988436 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:14:43.274439  988436 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:14:43.400707  988436 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:14:43.553766  988436 docker.go:233] disabling docker service ...
	I0314 19:14:43.553836  988436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:14:43.573799  988436 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:14:43.592832  988436 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:14:43.747673  988436 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:14:43.893550  988436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:14:43.911407  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:14:43.932872  988436 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:14:43.932936  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.945499  988436 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:14:43.945550  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.958055  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.970771  988436 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:14:43.983243  988436 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:14:43.995576  988436 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:14:44.006455  988436 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:14:44.006510  988436 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:14:44.021079  988436 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:14:44.033636  988436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:14:44.197240  988436 ssh_runner.go:195] Run: sudo systemctl restart crio
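
Note: the sequence above prepares the container runtime before first use: crictl is pointed at the CRI-O socket, and /etc/crio/crio.conf.d/02-crio.conf is edited so the pause image and cgroup driver match the kubeadm config, followed by a restart. A sketch of the same steps with a quick verification (values copied from the log; the conf.d layout is the one shipped in the minikube guest image):

    # Tell crictl which CRI endpoint to use.
    printf "%s\n" "runtime-endpoint: unix:///var/run/crio/crio.sock" | sudo tee /etc/crictl.yaml
    # Align CRI-O's pause image and cgroup driver with what kubeadm expects.
    sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
    sudo systemctl daemon-reload && sudo systemctl restart crio
    # Confirm the settings took effect and the runtime answers on the socket.
    grep -E 'pause_image|cgroup_manager' /etc/crio/crio.conf.d/02-crio.conf
    sudo crictl version
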
	I0314 19:14:44.339817  988436 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:14:44.339902  988436 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:14:44.345568  988436 start.go:562] Will wait 60s for crictl version
	I0314 19:14:44.345619  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:44.350149  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:14:44.396503  988436 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:14:44.396591  988436 ssh_runner.go:195] Run: crio --version
	I0314 19:14:44.430497  988436 ssh_runner.go:195] Run: crio --version
	I0314 19:14:44.461214  988436 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:14:44.462594  988436 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:14:44.465246  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:44.465634  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:14:44.465666  988436 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:14:44.465826  988436 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:14:44.470671  988436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:14:44.484305  988436 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:14:44.484460  988436 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:14:44.484509  988436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:14:44.516937  988436 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:14:44.517028  988436 ssh_runner.go:195] Run: which lz4
	I0314 19:14:44.521485  988436 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:14:44.526625  988436 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:14:44.526663  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:14:46.456066  988436 crio.go:444] duration metric: took 1.93461575s to copy over tarball
	I0314 19:14:46.456155  988436 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:14:49.494219  988436 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.038019069s)
	I0314 19:14:49.494250  988436 crio.go:451] duration metric: took 3.038150607s to extract the tarball
	I0314 19:14:49.494258  988436 ssh_runner.go:146] rm: /preloaded.tar.lz4
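
Note: the preload step copies a pre-built image/layer tarball to the guest and unpacks it into /var, which backs CRI-O's image storage. Assuming the tarball is already at /preloaded.tar.lz4 as in the log, the manual equivalent is:

    # Unpack the preloaded images into /var, preserving extended attributes
    # (security.capability matters for binaries inside the layers).
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    sudo rm -f /preloaded.tar.lz4
    # The runtime should now list the preloaded images.
    sudo crictl images
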
	I0314 19:14:49.539603  988436 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:14:49.599773  988436 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:14:49.599810  988436 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:14:49.599941  988436 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:14:49.599966  988436 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.599924  988436 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.599981  988436 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.599911  988436 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.600074  988436 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.600089  988436 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.599915  988436 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.601723  988436 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.601724  988436 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.601742  988436 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.601714  988436 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.602122  988436 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.749995  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.749995  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.760203  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.761697  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.777652  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.797979  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:49.819214  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:14:49.865166  988436 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:14:49.865239  988436 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:49.865275  988436 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:14:49.865304  988436 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:14:49.865319  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.865332  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.922233  988436 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:14:49.942607  988436 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:14:49.942662  988436 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:49.942720  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.954508  988436 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:14:49.954619  988436 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:49.954671  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:49.954534  988436 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:14:49.954702  988436 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:49.954751  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013099  988436 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:14:50.013154  988436 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:50.013171  988436 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:14:50.013205  988436 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:14:50.013209  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013228  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:14:50.013243  988436 ssh_runner.go:195] Run: which crictl
	I0314 19:14:50.013327  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:14:50.149874  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:14:50.149946  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:14:50.149985  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:14:50.150036  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:14:50.150086  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:14:50.150159  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:14:50.150203  988436 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:14:50.235991  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:14:50.259400  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:14:50.266436  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:14:50.266551  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:14:50.266635  988436 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:14:50.266695  988436 cache_images.go:92] duration metric: took 666.868199ms to LoadCachedImages
	W0314 19:14:50.266776  988436 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0: no such file or directory
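
Note: neither the preload nor the local image cache contained the v1.20.0 control-plane images here, so kubeadm pulls them during preflight instead. If needed, a missing image can be checked for and pulled by hand with crictl (image name as listed in the log; pulling ahead of time just saves preflight a download):

    # Check whether the apiserver image is present; pull it directly if not.
    sudo crictl images | grep -q kube-apiserver \
      || sudo crictl pull registry.k8s.io/kube-apiserver:v1.20.0
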
	I0314 19:14:50.266792  988436 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:14:50.266959  988436 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
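
Note: the kubelet flags above are installed as a systemd drop-in (the scp step a few lines below writes them to /etc/systemd/system/kubelet.service.d/10-kubeadm.conf). A sketch of writing the same drop-in by hand, using the ExecStart line from the log:

    sudo mkdir -p /etc/systemd/system/kubelet.service.d
    printf '%s\n' \
      '[Unit]' \
      'Wants=crio.service' \
      '' \
      '[Service]' \
      'ExecStart=' \
      'ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211' \
      '' \
      '[Install]' \
      | sudo tee /etc/systemd/system/kubelet.service.d/10-kubeadm.conf
    # The kubelet will crash-loop until "kubeadm init" writes
    # /var/lib/kubelet/config.yaml and the bootstrap kubeconfig.
    sudo systemctl daemon-reload && sudo systemctl restart kubelet
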
	I0314 19:14:50.267049  988436 ssh_runner.go:195] Run: crio config
	I0314 19:14:50.319163  988436 cni.go:84] Creating CNI manager for ""
	I0314 19:14:50.319196  988436 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:14:50.319219  988436 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:14:50.319247  988436 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:14:50.319462  988436 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
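
Note: the kubeadm configuration above is what gets written to /var/tmp/minikube/kubeadm.yaml.new below and moved into place as /var/tmp/minikube/kubeadm.yaml before init. To sanity-check such a config without modifying the node, kubeadm's dry-run mode can be used (an illustrative command, assuming the same binary path the log uses):

    # Render what kubeadm would do with this config, without changing the host.
    sudo /var/lib/minikube/binaries/v1.20.0/kubeadm init \
      --config /var/tmp/minikube/kubeadm.yaml --dry-run
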
	I0314 19:14:50.319554  988436 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:14:50.332194  988436 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:14:50.332289  988436 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:14:50.344452  988436 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:14:50.365405  988436 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:14:50.387571  988436 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:14:50.406541  988436 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:14:50.410989  988436 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:14:50.425781  988436 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:14:50.574672  988436 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:14:50.595348  988436 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:14:50.595393  988436 certs.go:194] generating shared ca certs ...
	I0314 19:14:50.595448  988436 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.595631  988436 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:14:50.595675  988436 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:14:50.595685  988436 certs.go:256] generating profile certs ...
	I0314 19:14:50.595751  988436 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:14:50.595766  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt with IP's: []
	I0314 19:14:50.961687  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt ...
	I0314 19:14:50.961725  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt: {Name:mk462f1a561c7e853d83f1337f12dd54e1b11a10 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.961917  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key ...
	I0314 19:14:50.961937  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key: {Name:mk13cbe9bade4e0c0e1e8edb424f78800ee373b7 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:50.962043  988436 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:14:50.962064  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.211]
	I0314 19:14:51.207477  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff ...
	I0314 19:14:51.207520  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff: {Name:mkf44ad6b646a50ac6bf1e23895fadd371a28f91 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.209199  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff ...
	I0314 19:14:51.209232  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff: {Name:mkcc29eb399e964534b152a6b8e9d73e64611845 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.209378  988436 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt.8692dcff -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt
	I0314 19:14:51.209493  988436 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key
	I0314 19:14:51.209583  988436 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:14:51.209610  988436 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt with IP's: []
	I0314 19:14:51.305439  988436 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt ...
	I0314 19:14:51.305472  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt: {Name:mk17654c8a0256296f6afb11e28f678027798ce1 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.324122  988436 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key ...
	I0314 19:14:51.324154  988436 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key: {Name:mk04faffa245e09a31a95155ce70b6b328f12ffd Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:14:51.324471  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:14:51.324528  988436 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:14:51.324544  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:14:51.324575  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:14:51.324606  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:14:51.324637  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:14:51.324702  988436 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:14:51.325477  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:14:51.354119  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:14:51.380946  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:14:51.411789  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:14:51.458353  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:14:51.483171  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:14:51.520147  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:14:51.548815  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:14:51.604141  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:14:51.632783  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:14:51.660399  988436 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:14:51.687968  988436 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:14:51.707887  988436 ssh_runner.go:195] Run: openssl version
	I0314 19:14:51.714430  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:14:51.726481  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.731724  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.731802  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:14:51.738051  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:14:51.749973  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:14:51.763098  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.768588  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.768668  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:14:51.775323  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:14:51.787855  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:14:51.800610  988436 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.805963  988436 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.806041  988436 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:14:51.813261  988436 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
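
Note: the symlink names created above (b5213941.0, 51391683.0, 3ec20f2e.0) follow OpenSSL's subject-hash convention, which is how the system trust store locates a CA certificate by hash. The relationship can be checked directly (paths as installed by the steps above):

    # The link is named "<subject hash>.0"; openssl prints the hash it expects.
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem   # prints b5213941
    ls -l /etc/ssl/certs/b5213941.0                                           # symlink back to minikubeCA.pem
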
	I0314 19:14:51.827741  988436 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:14:51.832613  988436 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:14:51.832672  988436 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:14:51.832794  988436 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:14:51.832852  988436 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:14:51.878164  988436 cri.go:89] found id: ""
	I0314 19:14:51.878252  988436 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:14:51.892635  988436 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:14:51.903863  988436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:14:51.917871  988436 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:14:51.917895  988436 kubeadm.go:156] found existing configuration files:
	
	I0314 19:14:51.917936  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:14:51.931245  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:14:51.931311  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:14:51.944845  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:14:51.956603  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:14:51.956662  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:14:51.968586  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:14:51.981838  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:14:51.981904  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:14:51.995108  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:14:52.006432  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:14:52.006578  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
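
Note: the four grep/rm pairs above are the stale-config cleanup: any kubeconfig under /etc/kubernetes that does not reference https://control-plane.minikube.internal:8443 is removed so kubeadm can regenerate it (here none existed, so all four greps simply fail). The same check as a compact loop:

    # Drop kubeconfigs that don't point at the expected control-plane endpoint.
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
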
	I0314 19:14:52.019652  988436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:14:52.181954  988436 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:14:52.182145  988436 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:14:52.399528  988436 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:14:52.399727  988436 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:14:52.399901  988436 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:14:52.638472  988436 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:14:52.640883  988436 out.go:204]   - Generating certificates and keys ...
	I0314 19:14:52.640989  988436 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:14:52.641067  988436 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:14:52.764665  988436 kubeadm.go:309] [certs] Generating "apiserver-kubelet-client" certificate and key
	I0314 19:14:52.875694  988436 kubeadm.go:309] [certs] Generating "front-proxy-ca" certificate and key
	I0314 19:14:53.088130  988436 kubeadm.go:309] [certs] Generating "front-proxy-client" certificate and key
	I0314 19:14:53.299437  988436 kubeadm.go:309] [certs] Generating "etcd/ca" certificate and key
	I0314 19:14:53.592735  988436 kubeadm.go:309] [certs] Generating "etcd/server" certificate and key
	I0314 19:14:53.593022  988436 kubeadm.go:309] [certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	I0314 19:14:53.727136  988436 kubeadm.go:309] [certs] Generating "etcd/peer" certificate and key
	I0314 19:14:53.727390  988436 kubeadm.go:309] [certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	I0314 19:14:53.898136  988436 kubeadm.go:309] [certs] Generating "etcd/healthcheck-client" certificate and key
	I0314 19:14:53.986921  988436 kubeadm.go:309] [certs] Generating "apiserver-etcd-client" certificate and key
	I0314 19:14:54.116669  988436 kubeadm.go:309] [certs] Generating "sa" key and public key
	I0314 19:14:54.117618  988436 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:14:54.265449  988436 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:14:54.426924  988436 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:14:54.691508  988436 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:14:54.890034  988436 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:14:54.919278  988436 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:14:54.919408  988436 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:14:54.920348  988436 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:14:55.126092  988436 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:14:55.127662  988436 out.go:204]   - Booting up control plane ...
	I0314 19:14:55.127792  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:14:55.137884  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:14:55.139279  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:14:55.140805  988436 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:14:55.148289  988436 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:15:35.145794  988436 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:15:35.145917  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:15:35.146198  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:15:40.146846  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:15:40.147134  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:15:50.148070  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:15:50.148322  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:16:10.149627  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:16:10.150257  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:16:50.150204  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:16:50.150544  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:16:50.150813  988436 kubeadm.go:309] 
	I0314 19:16:50.150891  988436 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:16:50.151362  988436 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:16:50.151394  988436 kubeadm.go:309] 
	I0314 19:16:50.151434  988436 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:16:50.151489  988436 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:16:50.151672  988436 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:16:50.151693  988436 kubeadm.go:309] 
	I0314 19:16:50.151834  988436 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:16:50.151883  988436 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:16:50.151960  988436 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:16:50.151988  988436 kubeadm.go:309] 
	I0314 19:16:50.152160  988436 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:16:50.152293  988436 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:16:50.152305  988436 kubeadm.go:309] 
	I0314 19:16:50.152460  988436 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:16:50.152563  988436 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:16:50.152657  988436 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:16:50.152765  988436 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:16:50.152783  988436 kubeadm.go:309] 
	I0314 19:16:50.155410  988436 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:16:50.155543  988436 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:16:50.155641  988436 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	W0314 19:16:50.155846  988436 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Generating "apiserver-kubelet-client" certificate and key
	[certs] Generating "front-proxy-ca" certificate and key
	[certs] Generating "front-proxy-client" certificate and key
	[certs] Generating "etcd/ca" certificate and key
	[certs] Generating "etcd/server" certificate and key
	[certs] etcd/server serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	[certs] Generating "etcd/peer" certificate and key
	[certs] etcd/peer serving cert is signed for DNS names [localhost old-k8s-version-968094] and IPs [192.168.72.211 127.0.0.1 ::1]
	[certs] Generating "etcd/healthcheck-client" certificate and key
	[certs] Generating "apiserver-etcd-client" certificate and key
	[certs] Generating "sa" key and public key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:16:50.155909  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:16:51.407071  988436 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.251129296s)
	I0314 19:16:51.407163  988436 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:16:51.424006  988436 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:16:51.434854  988436 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:16:51.434876  988436 kubeadm.go:156] found existing configuration files:
	
	I0314 19:16:51.434923  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:16:51.445124  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:16:51.445183  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:16:51.455566  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:16:51.465135  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:16:51.465189  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:16:51.475110  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:16:51.485763  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:16:51.485810  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:16:51.496117  988436 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:16:51.506699  988436 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:16:51.506758  988436 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:16:51.516477  988436 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:16:51.590549  988436 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:16:51.590634  988436 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:16:51.760561  988436 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:16:51.760680  988436 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:16:51.760801  988436 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:16:51.957979  988436 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:16:51.959959  988436 out.go:204]   - Generating certificates and keys ...
	I0314 19:16:51.960052  988436 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:16:51.960159  988436 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:16:51.960306  988436 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:16:51.960398  988436 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:16:51.960507  988436 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:16:51.960633  988436 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:16:51.961258  988436 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:16:51.961656  988436 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:16:51.962128  988436 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:16:51.962582  988436 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:16:51.962651  988436 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:16:51.962743  988436 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:16:52.078598  988436 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:16:52.339527  988436 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:16:52.623486  988436 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:16:52.767984  988436 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:16:52.787498  988436 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:16:52.788979  988436 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:16:52.789211  988436 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:16:52.951283  988436 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:16:52.953237  988436 out.go:204]   - Booting up control plane ...
	I0314 19:16:52.953375  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:16:52.968262  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:16:52.968407  988436 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:16:52.968894  988436 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:16:52.971585  988436 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:17:32.973944  988436 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:17:32.974090  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:17:32.974371  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:17:37.974205  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:17:37.974440  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:17:47.975157  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:17:47.975418  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:18:07.976719  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:18:07.976924  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:18:47.976368  988436 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:18:47.976614  988436 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:18:47.976640  988436 kubeadm.go:309] 
	I0314 19:18:47.976686  988436 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:18:47.976759  988436 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:18:47.976776  988436 kubeadm.go:309] 
	I0314 19:18:47.976829  988436 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:18:47.976878  988436 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:18:47.977023  988436 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:18:47.977036  988436 kubeadm.go:309] 
	I0314 19:18:47.977139  988436 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:18:47.977191  988436 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:18:47.977245  988436 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:18:47.977259  988436 kubeadm.go:309] 
	I0314 19:18:47.977371  988436 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:18:47.977463  988436 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:18:47.977472  988436 kubeadm.go:309] 
	I0314 19:18:47.977618  988436 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:18:47.977747  988436 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:18:47.977873  988436 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:18:47.977967  988436 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:18:47.977978  988436 kubeadm.go:309] 
	I0314 19:18:47.979252  988436 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:18:47.979370  988436 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:18:47.979466  988436 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:18:47.979553  988436 kubeadm.go:393] duration metric: took 3m56.146886343s to StartCluster
	I0314 19:18:47.979606  988436 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:18:47.979672  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:18:48.027209  988436 cri.go:89] found id: ""
	I0314 19:18:48.027249  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.027262  988436 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:18:48.027270  988436 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:18:48.027343  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:18:48.065354  988436 cri.go:89] found id: ""
	I0314 19:18:48.065388  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.065400  988436 logs.go:278] No container was found matching "etcd"
	I0314 19:18:48.065408  988436 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:18:48.065484  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:18:48.105266  988436 cri.go:89] found id: ""
	I0314 19:18:48.105300  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.105313  988436 logs.go:278] No container was found matching "coredns"
	I0314 19:18:48.105322  988436 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:18:48.105393  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:18:48.145934  988436 cri.go:89] found id: ""
	I0314 19:18:48.145965  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.145973  988436 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:18:48.145980  988436 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:18:48.146031  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:18:48.180977  988436 cri.go:89] found id: ""
	I0314 19:18:48.181010  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.181020  988436 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:18:48.181028  988436 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:18:48.181089  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:18:48.219151  988436 cri.go:89] found id: ""
	I0314 19:18:48.219182  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.219192  988436 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:18:48.219199  988436 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:18:48.219255  988436 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:18:48.264715  988436 cri.go:89] found id: ""
	I0314 19:18:48.264737  988436 logs.go:276] 0 containers: []
	W0314 19:18:48.264745  988436 logs.go:278] No container was found matching "kindnet"
	I0314 19:18:48.264761  988436 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:18:48.264776  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:18:48.376115  988436 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:18:48.376137  988436 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:18:48.376152  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:18:48.475719  988436 logs.go:123] Gathering logs for container status ...
	I0314 19:18:48.475765  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:18:48.518520  988436 logs.go:123] Gathering logs for kubelet ...
	I0314 19:18:48.518564  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:18:48.569420  988436 logs.go:123] Gathering logs for dmesg ...
	I0314 19:18:48.569457  988436 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	W0314 19:18:48.593696  988436 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:18:48.593763  988436 out.go:239] * 
	W0314 19:18:48.593834  988436 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:18:48.593867  988436 out.go:239] * 
	W0314 19:18:48.595038  988436 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:18:48.598742  988436 out.go:177] 
	W0314 19:18:48.600336  988436 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:18:48.600393  988436 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:18:48.600421  988436 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:18:48.602194  988436 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:188: failed starting minikube -first start-. args "out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 6 (246.006388ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:18:48.893946  991510 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-968094" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/FirstStart (294.76s)
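The failure above is kubeadm timing out because the kubelet never became healthy on the v1.20.0 node, and minikube's own suggestion is to override the kubelet cgroup driver. A minimal follow-up sketch, reusing the profile name and flags from the log above; the --extra-config value is the one the suggestion names, and whether it resolves this particular run is an assumption:

	# Retry the start with the cgroup-driver override the suggestion points at:
	out/minikube-linux-amd64 start -p old-k8s-version-968094 --driver=kvm2 --container-runtime=crio \
	  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd
	# If it still times out, inspect the kubelet and control-plane containers on the node,
	# following the hints kubeadm prints in the output above:
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a"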

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/Stop (139.23s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p no-preload-731976 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p no-preload-731976 --alsologtostderr -v=3: exit status 82 (2m0.628655074s)

                                                
                                                
-- stdout --
	* Stopping node "no-preload-731976"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:16:57.255578  990441 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:16:57.255757  990441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:57.255773  990441 out.go:304] Setting ErrFile to fd 2...
	I0314 19:16:57.255781  990441 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:16:57.256133  990441 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:16:57.256558  990441 out.go:298] Setting JSON to false
	I0314 19:16:57.256683  990441 mustload.go:65] Loading cluster: no-preload-731976
	I0314 19:16:57.257229  990441 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:16:57.257338  990441 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:16:57.257585  990441 mustload.go:65] Loading cluster: no-preload-731976
	I0314 19:16:57.257756  990441 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:16:57.257813  990441 stop.go:39] StopHost: no-preload-731976
	I0314 19:16:57.258613  990441 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:16:57.258696  990441 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:16:57.274670  990441 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34319
	I0314 19:16:57.275249  990441 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:16:57.275880  990441 main.go:141] libmachine: Using API Version  1
	I0314 19:16:57.275897  990441 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:16:57.276300  990441 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:16:57.278691  990441 out.go:177] * Stopping node "no-preload-731976"  ...
	I0314 19:16:57.280011  990441 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 19:16:57.280061  990441 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:16:57.280360  990441 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 19:16:57.280405  990441 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:16:57.283989  990441 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:16:57.284520  990441 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:16:57.284553  990441 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:16:57.284811  990441 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:16:57.285102  990441 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:16:57.285295  990441 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:16:57.285467  990441 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:16:57.421964  990441 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 19:16:57.494230  990441 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 19:16:57.576913  990441 main.go:141] libmachine: Stopping "no-preload-731976"...
	I0314 19:16:57.576951  990441 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:16:57.578901  990441 main.go:141] libmachine: (no-preload-731976) Calling .Stop
	I0314 19:16:57.583367  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 0/120
	I0314 19:16:58.585153  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 1/120
	I0314 19:16:59.586671  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 2/120
	I0314 19:17:00.588147  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 3/120
	I0314 19:17:01.589591  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 4/120
	I0314 19:17:02.591509  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 5/120
	I0314 19:17:03.593005  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 6/120
	I0314 19:17:04.594419  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 7/120
	I0314 19:17:05.595981  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 8/120
	I0314 19:17:06.597364  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 9/120
	I0314 19:17:07.599712  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 10/120
	I0314 19:17:08.601111  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 11/120
	I0314 19:17:09.602448  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 12/120
	I0314 19:17:10.604257  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 13/120
	I0314 19:17:11.605495  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 14/120
	I0314 19:17:12.607129  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 15/120
	I0314 19:17:13.608833  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 16/120
	I0314 19:17:14.610385  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 17/120
	I0314 19:17:15.612130  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 18/120
	I0314 19:17:16.613582  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 19/120
	I0314 19:17:17.615621  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 20/120
	I0314 19:17:18.618456  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 21/120
	I0314 19:17:19.620262  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 22/120
	I0314 19:17:20.621528  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 23/120
	I0314 19:17:21.623248  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 24/120
	I0314 19:17:22.625028  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 25/120
	I0314 19:17:23.627119  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 26/120
	I0314 19:17:24.628562  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 27/120
	I0314 19:17:25.629931  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 28/120
	I0314 19:17:26.631424  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 29/120
	I0314 19:17:27.633756  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 30/120
	I0314 19:17:28.635343  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 31/120
	I0314 19:17:29.636718  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 32/120
	I0314 19:17:30.637967  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 33/120
	I0314 19:17:31.639244  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 34/120
	I0314 19:17:32.641070  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 35/120
	I0314 19:17:33.642359  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 36/120
	I0314 19:17:34.644463  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 37/120
	I0314 19:17:35.645752  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 38/120
	I0314 19:17:36.647097  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 39/120
	I0314 19:17:37.649399  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 40/120
	I0314 19:17:38.651027  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 41/120
	I0314 19:17:39.652636  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 42/120
	I0314 19:17:40.654766  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 43/120
	I0314 19:17:41.656037  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 44/120
	I0314 19:17:42.658314  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 45/120
	I0314 19:17:43.660013  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 46/120
	I0314 19:17:44.661578  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 47/120
	I0314 19:17:45.662973  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 48/120
	I0314 19:17:46.664354  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 49/120
	I0314 19:17:47.666620  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 50/120
	I0314 19:17:48.668233  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 51/120
	I0314 19:17:49.670536  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 52/120
	I0314 19:17:50.671976  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 53/120
	I0314 19:17:51.673310  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 54/120
	I0314 19:17:52.675418  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 55/120
	I0314 19:17:53.677196  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 56/120
	I0314 19:17:54.678545  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 57/120
	I0314 19:17:55.680169  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 58/120
	I0314 19:17:56.682288  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 59/120
	I0314 19:17:57.684203  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 60/120
	I0314 19:17:58.685736  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 61/120
	I0314 19:17:59.687438  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 62/120
	I0314 19:18:00.689009  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 63/120
	I0314 19:18:01.690916  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 64/120
	I0314 19:18:02.692586  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 65/120
	I0314 19:18:03.694975  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 66/120
	I0314 19:18:04.696465  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 67/120
	I0314 19:18:05.697955  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 68/120
	I0314 19:18:06.699586  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 69/120
	I0314 19:18:07.701677  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 70/120
	I0314 19:18:08.703799  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 71/120
	I0314 19:18:09.705264  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 72/120
	I0314 19:18:10.706840  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 73/120
	I0314 19:18:11.708438  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 74/120
	I0314 19:18:12.710621  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 75/120
	I0314 19:18:13.712341  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 76/120
	I0314 19:18:14.714728  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 77/120
	I0314 19:18:15.716260  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 78/120
	I0314 19:18:16.717855  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 79/120
	I0314 19:18:17.720340  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 80/120
	I0314 19:18:18.722377  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 81/120
	I0314 19:18:19.724036  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 82/120
	I0314 19:18:20.725847  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 83/120
	I0314 19:18:21.727571  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 84/120
	I0314 19:18:22.729444  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 85/120
	I0314 19:18:23.731583  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 86/120
	I0314 19:18:24.733395  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 87/120
	I0314 19:18:25.735309  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 88/120
	I0314 19:18:26.737331  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 89/120
	I0314 19:18:27.739586  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 90/120
	I0314 19:18:28.741229  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 91/120
	I0314 19:18:29.742792  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 92/120
	I0314 19:18:30.744248  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 93/120
	I0314 19:18:31.746024  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 94/120
	I0314 19:18:32.748015  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 95/120
	I0314 19:18:33.749733  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 96/120
	I0314 19:18:34.751881  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 97/120
	I0314 19:18:35.753579  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 98/120
	I0314 19:18:36.755168  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 99/120
	I0314 19:18:37.757445  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 100/120
	I0314 19:18:38.759164  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 101/120
	I0314 19:18:39.760781  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 102/120
	I0314 19:18:40.762215  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 103/120
	I0314 19:18:41.763586  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 104/120
	I0314 19:18:42.765632  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 105/120
	I0314 19:18:43.767262  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 106/120
	I0314 19:18:44.769005  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 107/120
	I0314 19:18:45.770559  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 108/120
	I0314 19:18:46.771890  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 109/120
	I0314 19:18:47.774389  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 110/120
	I0314 19:18:48.775631  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 111/120
	I0314 19:18:49.777080  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 112/120
	I0314 19:18:50.778554  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 113/120
	I0314 19:18:51.780194  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 114/120
	I0314 19:18:52.782364  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 115/120
	I0314 19:18:53.783903  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 116/120
	I0314 19:18:54.785502  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 117/120
	I0314 19:18:55.787113  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 118/120
	I0314 19:18:56.788822  990441 main.go:141] libmachine: (no-preload-731976) Waiting for machine to stop 119/120
	I0314 19:18:57.789828  990441 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 19:18:57.789904  990441 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 19:18:57.791719  990441 out.go:177] 
	W0314 19:18:57.793010  990441 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 19:18:57.793031  990441 out.go:239] * 
	W0314 19:18:57.804118  990441 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:18:57.805906  990441 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p no-preload-731976 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976: exit status 3 (18.602823943s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:16.408538  991664 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0314 19:19:16.408560  991664 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-731976" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/Stop (139.23s)
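This stop failure is minikube polling the libvirt domain for 120 one-second intervals (the "Waiting for machine to stop N/120" lines above) and giving up after roughly two minutes while the VM still reports "Running". The DBG lines show the libvirt domain carries the profile name (no-preload-731976), so the state can be checked and the stop forced directly through libvirt. This is a hedged manual workaround, not something the test itself does; the connection URI is the qemu:///system one used elsewhere in this run:

	# Check what libvirt thinks the domain is doing:
	virsh -c qemu:///system domstate no-preload-731976
	# Ask the guest to shut down; if the ACPI shutdown is ignored, fall back to a hard stop:
	virsh -c qemu:///system shutdown no-preload-731976
	virsh -c qemu:///system destroy no-preload-731976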

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/Stop (139.2s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p embed-certs-992669 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p embed-certs-992669 --alsologtostderr -v=3: exit status 82 (2m0.566899242s)

                                                
                                                
-- stdout --
	* Stopping node "embed-certs-992669"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:17:21.342566  990640 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:17:21.342731  990640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:17:21.342750  990640 out.go:304] Setting ErrFile to fd 2...
	I0314 19:17:21.342756  990640 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:17:21.343067  990640 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:17:21.343346  990640 out.go:298] Setting JSON to false
	I0314 19:17:21.343418  990640 mustload.go:65] Loading cluster: embed-certs-992669
	I0314 19:17:21.343807  990640 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:17:21.343886  990640 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:17:21.344067  990640 mustload.go:65] Loading cluster: embed-certs-992669
	I0314 19:17:21.344165  990640 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:17:21.344188  990640 stop.go:39] StopHost: embed-certs-992669
	I0314 19:17:21.344608  990640 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:17:21.344659  990640 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:17:21.359618  990640 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33367
	I0314 19:17:21.360105  990640 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:17:21.360766  990640 main.go:141] libmachine: Using API Version  1
	I0314 19:17:21.360794  990640 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:17:21.361174  990640 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:17:21.363354  990640 out.go:177] * Stopping node "embed-certs-992669"  ...
	I0314 19:17:21.365310  990640 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 19:17:21.365355  990640 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:17:21.365592  990640 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 19:17:21.365617  990640 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:17:21.368169  990640 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:17:21.368567  990640 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:15:50 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:17:21.368592  990640 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:17:21.368748  990640 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:17:21.368923  990640 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:17:21.369071  990640 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:17:21.369271  990640 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:17:21.488408  990640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 19:17:21.546671  990640 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 19:17:21.622110  990640 main.go:141] libmachine: Stopping "embed-certs-992669"...
	I0314 19:17:21.622137  990640 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:17:21.623905  990640 main.go:141] libmachine: (embed-certs-992669) Calling .Stop
	I0314 19:17:21.627393  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 0/120
	I0314 19:17:22.628585  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 1/120
	I0314 19:17:23.630591  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 2/120
	I0314 19:17:24.632124  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 3/120
	I0314 19:17:25.633354  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 4/120
	I0314 19:17:26.635277  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 5/120
	I0314 19:17:27.636616  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 6/120
	I0314 19:17:28.638318  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 7/120
	I0314 19:17:29.639316  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 8/120
	I0314 19:17:30.641298  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 9/120
	I0314 19:17:31.643035  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 10/120
	I0314 19:17:32.644080  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 11/120
	I0314 19:17:33.645503  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 12/120
	I0314 19:17:34.646478  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 13/120
	I0314 19:17:35.647589  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 14/120
	I0314 19:17:36.649586  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 15/120
	I0314 19:17:37.650933  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 16/120
	I0314 19:17:38.652842  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 17/120
	I0314 19:17:39.654621  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 18/120
	I0314 19:17:40.655623  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 19/120
	I0314 19:17:41.657494  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 20/120
	I0314 19:17:42.658939  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 21/120
	I0314 19:17:43.660976  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 22/120
	I0314 19:17:44.662905  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 23/120
	I0314 19:17:45.664104  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 24/120
	I0314 19:17:46.665677  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 25/120
	I0314 19:17:47.667271  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 26/120
	I0314 19:17:48.668597  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 27/120
	I0314 19:17:49.670751  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 28/120
	I0314 19:17:50.672514  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 29/120
	I0314 19:17:51.674384  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 30/120
	I0314 19:17:52.675755  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 31/120
	I0314 19:17:53.678036  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 32/120
	I0314 19:17:54.679229  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 33/120
	I0314 19:17:55.680633  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 34/120
	I0314 19:17:56.682886  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 35/120
	I0314 19:17:57.684359  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 36/120
	I0314 19:17:58.685907  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 37/120
	I0314 19:17:59.687588  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 38/120
	I0314 19:18:00.689009  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 39/120
	I0314 19:18:01.691028  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 40/120
	I0314 19:18:02.692480  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 41/120
	I0314 19:18:03.694306  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 42/120
	I0314 19:18:04.696079  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 43/120
	I0314 19:18:05.697801  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 44/120
	I0314 19:18:06.699956  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 45/120
	I0314 19:18:07.701525  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 46/120
	I0314 19:18:08.703255  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 47/120
	I0314 19:18:09.704848  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 48/120
	I0314 19:18:10.706605  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 49/120
	I0314 19:18:11.708772  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 50/120
	I0314 19:18:12.710476  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 51/120
	I0314 19:18:13.712063  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 52/120
	I0314 19:18:14.714036  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 53/120
	I0314 19:18:15.715905  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 54/120
	I0314 19:18:16.717954  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 55/120
	I0314 19:18:17.719845  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 56/120
	I0314 19:18:18.721602  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 57/120
	I0314 19:18:19.723388  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 58/120
	I0314 19:18:20.725104  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 59/120
	I0314 19:18:21.727308  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 60/120
	I0314 19:18:22.729082  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 61/120
	I0314 19:18:23.731007  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 62/120
	I0314 19:18:24.733162  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 63/120
	I0314 19:18:25.734760  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 64/120
	I0314 19:18:26.737184  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 65/120
	I0314 19:18:27.739123  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 66/120
	I0314 19:18:28.741355  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 67/120
	I0314 19:18:29.743094  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 68/120
	I0314 19:18:30.744415  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 69/120
	I0314 19:18:31.746246  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 70/120
	I0314 19:18:32.748100  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 71/120
	I0314 19:18:33.749900  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 72/120
	I0314 19:18:34.752546  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 73/120
	I0314 19:18:35.754549  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 74/120
	I0314 19:18:36.756179  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 75/120
	I0314 19:18:37.758170  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 76/120
	I0314 19:18:38.759748  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 77/120
	I0314 19:18:39.761080  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 78/120
	I0314 19:18:40.762543  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 79/120
	I0314 19:18:41.764305  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 80/120
	I0314 19:18:42.765796  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 81/120
	I0314 19:18:43.768007  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 82/120
	I0314 19:18:44.769612  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 83/120
	I0314 19:18:45.771172  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 84/120
	I0314 19:18:46.772897  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 85/120
	I0314 19:18:47.774675  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 86/120
	I0314 19:18:48.775880  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 87/120
	I0314 19:18:49.777314  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 88/120
	I0314 19:18:50.778796  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 89/120
	I0314 19:18:51.780779  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 90/120
	I0314 19:18:52.782520  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 91/120
	I0314 19:18:53.784083  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 92/120
	I0314 19:18:54.786206  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 93/120
	I0314 19:18:55.787449  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 94/120
	I0314 19:18:56.789454  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 95/120
	I0314 19:18:57.791589  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 96/120
	I0314 19:18:58.792988  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 97/120
	I0314 19:18:59.794565  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 98/120
	I0314 19:19:00.795913  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 99/120
	I0314 19:19:01.797935  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 100/120
	I0314 19:19:02.799553  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 101/120
	I0314 19:19:03.800940  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 102/120
	I0314 19:19:04.802427  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 103/120
	I0314 19:19:05.803913  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 104/120
	I0314 19:19:06.806038  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 105/120
	I0314 19:19:07.807414  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 106/120
	I0314 19:19:08.808690  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 107/120
	I0314 19:19:09.809993  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 108/120
	I0314 19:19:10.811341  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 109/120
	I0314 19:19:11.813487  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 110/120
	I0314 19:19:12.814770  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 111/120
	I0314 19:19:13.816144  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 112/120
	I0314 19:19:14.817466  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 113/120
	I0314 19:19:15.818981  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 114/120
	I0314 19:19:16.821110  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 115/120
	I0314 19:19:17.822453  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 116/120
	I0314 19:19:18.823871  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 117/120
	I0314 19:19:19.825318  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 118/120
	I0314 19:19:20.826747  990640 main.go:141] libmachine: (embed-certs-992669) Waiting for machine to stop 119/120
	I0314 19:19:21.827594  990640 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 19:19:21.827668  990640 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 19:19:21.829573  990640 out.go:177] 
	W0314 19:19:21.831027  990640 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 19:19:21.831050  990640 out.go:239] * 
	W0314 19:19:21.842080  990640 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:19:21.843671  990640 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop-. args "out/minikube-linux-amd64 stop -p embed-certs-992669 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669: exit status 3 (18.627545331s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:40.472578  991810 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0314 19:19:40.472600  991810 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-992669" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/Stop (139.20s)
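The same two-minute stop timeout hit embed-certs-992669, and the warning box asks for a logs bundle plus the /tmp/minikube_stop_*.log file. A short sketch of collecting those artifacts, assuming the host is still reachable when it runs (the post-mortem above already shows "no route to host" after the timeout, so this has to happen before the VM network drops):

	# Gather the files the warning box asks to attach to a GitHub issue:
	out/minikube-linux-amd64 -p embed-certs-992669 logs --file=logs.txt
	cp /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_2.log .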

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p default-k8s-diff-port-440341 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Non-zero exit: out/minikube-linux-amd64 stop -p default-k8s-diff-port-440341 --alsologtostderr -v=3: exit status 82 (2m0.551861761s)

                                                
                                                
-- stdout --
	* Stopping node "default-k8s-diff-port-440341"  ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:18:35.097814  991457 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:18:35.098099  991457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:18:35.098108  991457 out.go:304] Setting ErrFile to fd 2...
	I0314 19:18:35.098113  991457 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:18:35.098314  991457 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:18:35.098546  991457 out.go:298] Setting JSON to false
	I0314 19:18:35.098618  991457 mustload.go:65] Loading cluster: default-k8s-diff-port-440341
	I0314 19:18:35.098950  991457 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:18:35.099008  991457 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:18:35.099166  991457 mustload.go:65] Loading cluster: default-k8s-diff-port-440341
	I0314 19:18:35.099262  991457 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:18:35.099286  991457 stop.go:39] StopHost: default-k8s-diff-port-440341
	I0314 19:18:35.099645  991457 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:18:35.099692  991457 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:18:35.115439  991457 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37895
	I0314 19:18:35.115945  991457 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:18:35.116573  991457 main.go:141] libmachine: Using API Version  1
	I0314 19:18:35.116604  991457 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:18:35.116973  991457 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:18:35.119322  991457 out.go:177] * Stopping node "default-k8s-diff-port-440341"  ...
	I0314 19:18:35.120718  991457 machine.go:157] backing up vm config to /var/lib/minikube/backup: [/etc/cni /etc/kubernetes]
	I0314 19:18:35.120742  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:18:35.120983  991457 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/backup
	I0314 19:18:35.121026  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:18:35.123761  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:18:35.124199  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:18:35.124252  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:18:35.124410  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:18:35.124574  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:18:35.124744  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:18:35.124893  991457 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:18:35.234550  991457 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/cni /var/lib/minikube/backup
	I0314 19:18:35.312634  991457 ssh_runner.go:195] Run: sudo rsync --archive --relative /etc/kubernetes /var/lib/minikube/backup
	I0314 19:18:35.373868  991457 main.go:141] libmachine: Stopping "default-k8s-diff-port-440341"...
	I0314 19:18:35.373898  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:18:35.375522  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Stop
	I0314 19:18:35.378859  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 0/120
	I0314 19:18:36.380108  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 1/120
	I0314 19:18:37.381590  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 2/120
	I0314 19:18:38.383044  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 3/120
	I0314 19:18:39.384430  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 4/120
	I0314 19:18:40.386680  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 5/120
	I0314 19:18:41.388016  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 6/120
	I0314 19:18:42.389266  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 7/120
	I0314 19:18:43.390581  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 8/120
	I0314 19:18:44.391920  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 9/120
	I0314 19:18:45.394237  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 10/120
	I0314 19:18:46.395640  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 11/120
	I0314 19:18:47.397046  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 12/120
	I0314 19:18:48.398990  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 13/120
	I0314 19:18:49.400499  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 14/120
	I0314 19:18:50.402443  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 15/120
	I0314 19:18:51.403933  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 16/120
	I0314 19:18:52.405386  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 17/120
	I0314 19:18:53.407071  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 18/120
	I0314 19:18:54.408674  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 19/120
	I0314 19:18:55.410758  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 20/120
	I0314 19:18:56.412028  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 21/120
	I0314 19:18:57.413456  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 22/120
	I0314 19:18:58.415108  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 23/120
	I0314 19:18:59.416552  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 24/120
	I0314 19:19:00.418650  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 25/120
	I0314 19:19:01.419986  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 26/120
	I0314 19:19:02.421294  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 27/120
	I0314 19:19:03.422549  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 28/120
	I0314 19:19:04.424034  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 29/120
	I0314 19:19:05.426461  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 30/120
	I0314 19:19:06.427756  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 31/120
	I0314 19:19:07.429162  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 32/120
	I0314 19:19:08.430476  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 33/120
	I0314 19:19:09.431919  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 34/120
	I0314 19:19:10.434181  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 35/120
	I0314 19:19:11.435521  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 36/120
	I0314 19:19:12.436934  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 37/120
	I0314 19:19:13.438255  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 38/120
	I0314 19:19:14.439749  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 39/120
	I0314 19:19:15.441974  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 40/120
	I0314 19:19:16.443341  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 41/120
	I0314 19:19:17.444871  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 42/120
	I0314 19:19:18.446273  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 43/120
	I0314 19:19:19.447759  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 44/120
	I0314 19:19:20.449770  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 45/120
	I0314 19:19:21.451375  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 46/120
	I0314 19:19:22.452717  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 47/120
	I0314 19:19:23.454349  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 48/120
	I0314 19:19:24.455753  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 49/120
	I0314 19:19:25.458088  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 50/120
	I0314 19:19:26.459344  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 51/120
	I0314 19:19:27.460885  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 52/120
	I0314 19:19:28.462476  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 53/120
	I0314 19:19:29.464128  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 54/120
	I0314 19:19:30.466290  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 55/120
	I0314 19:19:31.467646  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 56/120
	I0314 19:19:32.469098  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 57/120
	I0314 19:19:33.470719  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 58/120
	I0314 19:19:34.472293  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 59/120
	I0314 19:19:35.474686  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 60/120
	I0314 19:19:36.476144  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 61/120
	I0314 19:19:37.477755  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 62/120
	I0314 19:19:38.479280  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 63/120
	I0314 19:19:39.480862  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 64/120
	I0314 19:19:40.482674  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 65/120
	I0314 19:19:41.484166  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 66/120
	I0314 19:19:42.485678  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 67/120
	I0314 19:19:43.487123  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 68/120
	I0314 19:19:44.488597  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 69/120
	I0314 19:19:45.491036  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 70/120
	I0314 19:19:46.492419  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 71/120
	I0314 19:19:47.494027  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 72/120
	I0314 19:19:48.495497  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 73/120
	I0314 19:19:49.497110  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 74/120
	I0314 19:19:50.499236  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 75/120
	I0314 19:19:51.501215  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 76/120
	I0314 19:19:52.502650  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 77/120
	I0314 19:19:53.504204  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 78/120
	I0314 19:19:54.505587  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 79/120
	I0314 19:19:55.507900  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 80/120
	I0314 19:19:56.509442  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 81/120
	I0314 19:19:57.510711  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 82/120
	I0314 19:19:58.512234  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 83/120
	I0314 19:19:59.513704  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 84/120
	I0314 19:20:00.516008  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 85/120
	I0314 19:20:01.517412  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 86/120
	I0314 19:20:02.518961  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 87/120
	I0314 19:20:03.520388  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 88/120
	I0314 19:20:04.521842  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 89/120
	I0314 19:20:05.524055  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 90/120
	I0314 19:20:06.525364  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 91/120
	I0314 19:20:07.526697  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 92/120
	I0314 19:20:08.528038  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 93/120
	I0314 19:20:09.529416  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 94/120
	I0314 19:20:10.531613  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 95/120
	I0314 19:20:11.532923  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 96/120
	I0314 19:20:12.534257  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 97/120
	I0314 19:20:13.535513  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 98/120
	I0314 19:20:14.536942  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 99/120
	I0314 19:20:15.538473  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 100/120
	I0314 19:20:16.539901  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 101/120
	I0314 19:20:17.541650  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 102/120
	I0314 19:20:18.543551  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 103/120
	I0314 19:20:19.544899  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 104/120
	I0314 19:20:20.547306  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 105/120
	I0314 19:20:21.549189  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 106/120
	I0314 19:20:22.550668  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 107/120
	I0314 19:20:23.552260  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 108/120
	I0314 19:20:24.553797  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 109/120
	I0314 19:20:25.556438  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 110/120
	I0314 19:20:26.558922  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 111/120
	I0314 19:20:27.560409  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 112/120
	I0314 19:20:28.561965  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 113/120
	I0314 19:20:29.563398  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 114/120
	I0314 19:20:30.565446  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 115/120
	I0314 19:20:31.566835  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 116/120
	I0314 19:20:32.568687  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 117/120
	I0314 19:20:33.570711  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 118/120
	I0314 19:20:34.572402  991457 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for machine to stop 119/120
	I0314 19:20:35.573754  991457 stop.go:66] stop err: unable to stop vm, current state "Running"
	W0314 19:20:35.573818  991457 stop.go:165] stop host returned error: Temporary Error: stop: unable to stop vm, current state "Running"
	I0314 19:20:35.575831  991457 out.go:177] 
	W0314 19:20:35.577319  991457 out.go:239] X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	X Exiting due to GUEST_STOP_TIMEOUT: Unable to stop VM: Temporary Error: stop: unable to stop vm, current state "Running"
	W0314 19:20:35.577339  991457 out.go:239] * 
	* 
	W0314 19:20:35.588398  991457 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_stop_24ef9d461bcadf056806ccb9bba8d5f9f54754a6_3.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:20:35.589853  991457 out.go:177] 

                                                
                                                
** /stderr **
start_stop_delete_test.go:230: failed stopping minikube - first stop. args "out/minikube-linux-amd64 stop -p default-k8s-diff-port-440341 --alsologtostderr -v=3" : exit status 82
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341: exit status 3 (18.609328282s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:20:54.200532  992379 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host
	E0314 19:20:54.200557  992379 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-440341" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/Stop (139.16s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context old-k8s-version-968094 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) Non-zero exit: kubectl --context old-k8s-version-968094 create -f testdata/busybox.yaml: exit status 1 (43.99394ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-968094" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:196: kubectl --context old-k8s-version-968094 create -f testdata/busybox.yaml failed: exit status 1
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 6 (238.473176ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:18:49.177550  991550 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-968094" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 6 (251.277402ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:18:49.426866  991580 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-968094" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/DeployApp (0.53s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.7s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-968094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-968094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: exit status 10 (1m39.417735966s)

                                                
                                                
-- stdout --
	* metrics-server is an addon maintained by Kubernetes. For any concerns contact minikube on GitHub.
	You can view the list of minikube maintainers at: https://github.com/kubernetes/minikube/blob/master/OWNERS
	  - Using image fake.domain/registry.k8s.io/echoserver:1.4
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE: enable failed: run callbacks: running callbacks: [sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.20.0/kubectl apply --force -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	]
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_2bafae6fa40fec163538f94366e390b0317a8b15_4.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:207: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable metrics-server -p old-k8s-version-968094 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain": exit status 10
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context old-k8s-version-968094 describe deploy/metrics-server -n kube-system
start_stop_delete_test.go:215: (dbg) Non-zero exit: kubectl --context old-k8s-version-968094 describe deploy/metrics-server -n kube-system: exit status 1 (45.409393ms)

                                                
                                                
** stderr ** 
	error: context "old-k8s-version-968094" does not exist

                                                
                                                
** /stderr **
start_stop_delete_test.go:217: failed to get info on auto-pause deployments. args "kubectl --context old-k8s-version-968094 describe deploy/metrics-server -n kube-system": exit status 1
start_stop_delete_test.go:221: addon did not load correct image. Expected to contain " fake.domain/registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 6 (237.333801ms)

                                                
                                                
-- stdout --
	Running
	WARNING: Your kubectl is pointing to stale minikube-vm.
	To fix the kubectl context, run `minikube update-context`

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:20:29.129645  992220 status.go:417] kubeconfig endpoint: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 6 (may be ok)
helpers_test.go:241: "old-k8s-version-968094" host is not running, skipping log retrieval (state="Running\nWARNING: Your kubectl is pointing to stale minikube-vm.\nTo fix the kubectl context, run `minikube update-context`")
--- FAIL: TestStartStop/group/old-k8s-version/serial/EnableAddonWhileActive (99.70s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976: exit status 3 (3.197881844s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:19.608587  991750 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0314 19:19:19.608607  991750 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p no-preload-731976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p no-preload-731976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.161957672s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p no-preload-731976 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976: exit status 3 (3.053920661s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:28.824586  991839 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host
	E0314 19:19:28.824610  991839 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.39.148:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "no-preload-731976" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/no-preload/serial/EnableAddonAfterStop (12.41s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669: exit status 3 (3.199845042s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:43.672598  991944 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0314 19:19:43.672624  991944 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p embed-certs-992669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p embed-certs-992669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.16161549s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p embed-certs-992669 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669: exit status 3 (3.054408972s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:19:52.888658  992016 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host
	E0314 19:19:52.888682  992016 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.50.213:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "embed-certs-992669" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/embed-certs/serial/EnableAddonAfterStop (12.42s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/SecondStart (750.57s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0
start_stop_delete_test.go:256: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0: exit status 109 (12m27.122376492s)

                                                
                                                
-- stdout --
	* [old-k8s-version-968094] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	* Using the kvm2 driver based on existing profile
	* Starting "old-k8s-version-968094" primary control-plane node in "old-k8s-version-968094" cluster
	* Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	* Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	  - Generating certificates and keys ...
	  - Booting up control plane ...
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:20:33.712573  992344 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:20:33.712843  992344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:20:33.712854  992344 out.go:304] Setting ErrFile to fd 2...
	I0314 19:20:33.712858  992344 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:20:33.713092  992344 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:20:33.713713  992344 out.go:298] Setting JSON to false
	I0314 19:20:33.714619  992344 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97386,"bootTime":1710346648,"procs":203,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:20:33.714691  992344 start.go:139] virtualization: kvm guest
	I0314 19:20:33.716952  992344 out.go:177] * [old-k8s-version-968094] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:20:33.718871  992344 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:20:33.718880  992344 notify.go:220] Checking for updates...
	I0314 19:20:33.720375  992344 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:20:33.721807  992344 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:20:33.723126  992344 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:20:33.724493  992344 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:20:33.725761  992344 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:20:33.727289  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:20:33.727728  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:20:33.727775  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:20:33.742788  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40109
	I0314 19:20:33.743263  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:20:33.743816  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:20:33.743857  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:20:33.744337  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:20:33.744603  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:20:33.746529  992344 out.go:177] * Kubernetes 1.28.4 is now available. If you would like to upgrade, specify: --kubernetes-version=v1.28.4
	I0314 19:20:33.747843  992344 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:20:33.748128  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:20:33.748166  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:20:33.763097  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46425
	I0314 19:20:33.763475  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:20:33.763920  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:20:33.763941  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:20:33.764266  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:20:33.764416  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:20:33.798338  992344 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:20:33.799677  992344 start.go:297] selected driver: kvm2
	I0314 19:20:33.799689  992344 start.go:901] validating driver "kvm2" against &{Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:20:33.799804  992344 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:20:33.800502  992344 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:20:33.800561  992344 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:20:33.814552  992344 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:20:33.814887  992344 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:20:33.814917  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:20:33.814924  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:20:33.814965  992344 start.go:340] cluster config:
	{Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:20:33.815060  992344 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:20:33.816830  992344 out.go:177] * Starting "old-k8s-version-968094" primary control-plane node in "old-k8s-version-968094" cluster
	I0314 19:20:33.818138  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:20:33.818172  992344 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 19:20:33.818182  992344 cache.go:56] Caching tarball of preloaded images
	I0314 19:20:33.818253  992344 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:20:33.818264  992344 cache.go:59] Finished verifying existence of preloaded tar for v1.20.0 on crio
	I0314 19:20:33.818381  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:20:33.818549  992344 start.go:360] acquireMachinesLock for old-k8s-version-968094: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
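For readers following the provisioning steps: the external ssh invocation logged just above is only a liveness probe; the driver keeps running `exit 0` over the generated key until it succeeds. A standalone equivalent (an editor's sketch reusing the key path and address shown in this run's log, not a command captured in the report) would be:

	# Probe SSH readiness the same way the log above does (sketch; key path and IP are from this run).
	ssh -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
	    -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa \
	    docker@192.168.72.211 'exit 0' && echo "SSH is up"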
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
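	(Editor's note: the cycle above repeats below roughly every 2-3 seconds; minikube probes each expected control-plane container with crictl, finds none, and then gathers kubelet, dmesg, CRI-O and container-status logs before retrying. A minimal shell sketch of the same emptiness check, for readers who want to reproduce it on the node by hand; it assumes crictl is installed, reuses the exact component names and flags seen in the log, and is only illustrative, not minikube's actual retry code:
	
	# Illustrative only: re-run the per-component check that the log above performs.
	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  if [ -z "$ids" ]; then
	    echo "no container found matching \"$name\""
	  fi
	done
	)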
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
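	(Editor's note: every "describe nodes" attempt fails the same way: kubectl cannot reach the API server on localhost:8443, which is consistent with the empty kube-apiserver container listings above. A quick illustrative diagnostic, not part of the test run, assuming ss and curl are available on the node:
	
	# Illustrative only: confirm nothing is listening on the apiserver port.
	sudo ss -ltn | grep 8443 || echo "nothing listening on 8443"
	# With no apiserver up, this is refused, matching the errors in the log:
	curl -sk --max-time 5 https://localhost:8443/healthz; echo
	)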
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
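The kubelet-check lines above are kubeadm polling the kubelet health endpoint on 127.0.0.1:10248 until the wait-control-plane phase times out; the repeated connection-refused errors mean the kubelet process never came up. Following the troubleshooting advice by hand on the node (for example after 'minikube ssh' into the affected profile) looks roughly like this, assuming the CRI-O socket path shown in the log:

	curl -sSL http://localhost:10248/healthz            # the health probe kubeadm uses
	sudo systemctl status kubelet                        # unit state
	sudo journalctl -xeu kubelet | tail -n 100           # recent kubelet journal
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause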
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
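At this point minikube enumerates the expected control-plane and addon containers with crictl and finds none of them, which is consistent with a kubelet that never launched the static pods. The same check can be reproduced manually on the node with the command the log uses:

	for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
	    echo "== $name =="
	    sudo crictl ps -a --quiet --name="$name"
	done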
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
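The log gathering above collects the host-side evidence that is still available without an apiserver: the kubelet and CRI-O journals, dmesg, container status, and a describe-nodes call that fails because nothing is listening on localhost:8443. Run by hand on the node, the same collection is (commands taken from the log; the kubectl path is the minikube-staged v1.20.0 binary):

	sudo journalctl -u kubelet -n 400
	sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	sudo journalctl -u crio -n 400
	sudo crictl ps -a
	sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig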
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
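For a report like this, the log file the box asks for can be captured per profile; a hedged example, with the profile name as a placeholder rather than a value taken from this log:

	out/minikube-linux-amd64 -p <profile> logs --file=logs.txt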
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 

                                                
                                                
** /stderr **
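The captured kubeadm output above already names the commands for investigating this failure mode. A minimal sketch of running them against the affected node over SSH (an illustration only, assuming the VM is still reachable; the profile name is taken from the failed test and the commands are exactly the ones recommended in the log):

	# inspect the kubelet and any crashed control-plane containers on the node
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo journalctl -xeu kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh \
	  "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"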
start_stop_delete_test.go:259: failed to start minikube post-stop. args "out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 --alsologtostderr --wait=true --kvm-network=default --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.20.0": exit status 109
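The suggestion emitted in the stderr above is to pin the kubelet cgroup driver. A sketch of a manual retry, reusing the same profile and flags as the failed invocation plus the suggested --extra-config flag (this is not something the test itself runs):

	# same start command as the failed one, with the cgroup driver pinned to systemd
	# as suggested by the K8S_KUBELET_NOT_RUNNING error above
	out/minikube-linux-amd64 start -p old-k8s-version-968094 --memory=2200 \
	  --alsologtostderr --wait=true --kvm-network=default \
	  --kvm-qemu-uri=qemu:///system --disable-driver-mounts --keep-context=false \
	  --driver=kvm2 --container-runtime=crio --kubernetes-version=v1.20.0 \
	  --extra-config=kubelet.cgroup-driver=systemd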
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (272.156211ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/SecondStart FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/SecondStart]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25: (1.53367277s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/SecondStart logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
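The DBG lines above show how the kvm2 driver waits for SSH: it shells out to the system ssh client with host-key checking disabled and repeatedly runs "exit 0" until the command succeeds. A minimal manual equivalent, reusing the IP address and key path from this log, would be:

    ssh -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa \
        -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null -o ConnectTimeout=10 \
        docker@192.168.50.213 'exit 0' && echo "guest SSH is ready"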
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
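provision.go:117 generates the Docker-machine server certificate in Go (crypto/x509), signing it with the ca.pem/ca-key.pem pair and the SAN list shown above. Purely as an illustrative stand-in, not the code minikube actually runs, a certificate with the same shape could be produced from the CLI:

    openssl req -new -newkey rsa:2048 -nodes -keyout server-key.pem \
        -subj "/O=jenkins.embed-certs-992669" -out server.csr
    openssl x509 -req -in server.csr -CA ca.pem -CAkey ca-key.pem -CAcreateserial -days 365 -out server.pem \
        -extfile <(printf 'subjectAltName=IP:127.0.0.1,IP:192.168.50.213,DNS:embed-certs-992669,DNS:localhost,DNS:minikube\n')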
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
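The SSH command logged at 19:24:26.599784 appears with a mangled printf verb (%!s(MISSING) is a logging artifact, not part of what ran). What it actually writes is a one-line /etc/sysconfig/crio.minikube that adds the service CIDR as an insecure registry before restarting CRI-O, roughly:

    sudo mkdir -p /etc/sysconfig
    printf "\nCRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '\n" \
        | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio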
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
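The four sed invocations above rewrite /etc/crio/crio.conf.d/02-crio.conf in place: pin the pause image, switch the cgroup manager to cgroupfs, and replace any existing conmon_cgroup setting with "pod". The relevant fragment of the drop-in should end up roughly as follows (illustrative; the TOML section headers are assumed from CRI-O's stock layout and are not shown in the log):

    [crio.image]
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"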
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
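The sysctl probe at 19:24:28.184604 exits with status 255 because the br_netfilter module is not loaded yet, which crio.go:148 treats as non-fatal; the two Run lines above then load the module and enable IPv4 forwarding. Condensed from the log:

    sudo modprobe br_netfilter
    sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
    sysctl net.bridge.bridge-nf-call-iptables   # should now resolve instead of "No such file or directory"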
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
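The bash one-liner above keeps the host.minikube.internal entry idempotent: it filters any existing mapping out of /etc/hosts, appends the host-side address of the VM network, and copies the temp file back with sudo (a plain redirection into /etc/hosts would run without root). The same command, annotated:

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts      # drop any stale mapping
      printf '192.168.50.1\thost.minikube.internal\n'      # re-add it with the current gateway address
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts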
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
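Because crictl reported no preloaded images (crio.go:492) and no tarball was present on the guest, the ~458 MB preload archive is copied over and unpacked into /var. The guest-side steps, condensed from the log:

    stat -c "%s %y" /preloaded.tar.lz4                # existence check; fails on a freshly restarted VM
    # minikube then scp's the cached preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 to /preloaded.tar.lz4
    sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
    rm /preloaded.tar.lz4                             # drop the tarball once extracted (ssh_runner.go:146)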
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
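The rendered kubeadm.yaml above bundles four documents (InitConfiguration, ClusterConfiguration, KubeletConfiguration, KubeProxyConfiguration) and is staged on the guest as /var/tmp/minikube/kubeadm.yaml.new; the "0%!"(MISSING) values in the printed evictionHard block are printf-logging artifacts for plain "0%" thresholds. A hedged way to sanity-check such a file by hand, assuming the staged kubeadm binary ships the config validate subcommand (present in recent kubeadm releases, but not exercised in this log):

    sudo /var/lib/minikube/binaries/v1.28.4/kubeadm config validate \
        --config /var/tmp/minikube/kubeadm.yaml.new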
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
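The three test -L || ln -fs commands install each CA bundle under /etc/ssl/certs using OpenSSL's subject-hash naming (b5213941.0, 51391683.0, 3ec20f2e.0), which is exactly what the preceding openssl x509 -hash -noout calls compute. For one certificate the pattern is:

    h=$(openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem)
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem "/etc/ssl/certs/${h}.0"    # e.g. b5213941.0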
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
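The six openssl runs above probe the existing control-plane certificates with -checkend 86400, which exits non-zero if a certificate is already expired or will expire within the next 86400 seconds (24 hours), so minikube can tell whether the restored cluster's certs are still usable. The idiom:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
        echo "still valid for at least another 24h"
    else
        echo "expires within 24h (or already expired)"
    fi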
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
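The retry.go lines above show the libmachine kvm2 driver polling for the VM's IP address and sleeping a growing, jittered interval between attempts ("will retry after ..."). A minimal sketch of that retry pattern, with a hypothetical lookupIP helper standing in for the real libvirt DHCP-lease query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a hypothetical stand-in for querying the libvirt network for the
// domain's DHCP lease; it fails until the machine has come up.
func lookupIP() (string, error) { return "", errors.New("machine has no IP yet") }

func waitForIP(timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	backoff := 250 * time.Millisecond
	for time.Now().Before(deadline) {
		if ip, err := lookupIP(); err == nil {
			return ip, nil
		}
		// Grow and jitter the delay, matching the increasing "will retry after" intervals.
		sleep := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
		time.Sleep(sleep)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for the machine's IP")
}

func main() {
	if ip, err := waitForIP(3 * time.Second); err == nil {
		fmt.Println("found IP:", ip)
	} else {
		fmt.Println(err)
	}
}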
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
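The fix.go lines above read the guest's clock over SSH (the date command rendered in the log as date +%!s(MISSING).%!N(MISSING), i.e. seconds and nanoseconds), compute the delta against the host clock, and accept it if it falls within a tolerance. A small Go sketch of that comparison; the 2-second tolerance is an assumption, not a value taken from the log:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns output like "1710444316.007193080" (seconds.nanoseconds)
// into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		if nsec, err = strconv.ParseInt(parts[1], 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710444316.007193080") // sample value from the log above
	if err != nil {
		panic(err)
	}
	delta := guest.Sub(time.Now())
	if delta < 0 {
		delta = -delta
	}
	const tolerance = 2 * time.Second // hypothetical threshold, not from the log
	if delta <= tolerance {
		fmt.Printf("guest clock delta is within tolerance: %v\n", delta)
	} else {
		fmt.Printf("guest clock delta %v exceeds tolerance; the clock would need syncing\n", delta)
	}
}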
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
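The kubeadm YAML above is rendered by minikube from the option set logged at kubeadm.go:181. As a loose illustration of that templating step (this is not minikube's actual template or option struct, just a trimmed-down sketch), a minimal Go program with text/template might look like this:

package main

import (
	"os"
	"text/template"
)

// clusterParams is a hypothetical, trimmed-down stand-in for the option
// struct minikube logs at kubeadm.go:181.
type clusterParams struct {
	BindPort          int
	ClusterName       string
	KubernetesVersion string
	PodSubnet         string
	ServiceSubnet     string
}

// kubeadmTmpl renders only a fragment of a ClusterConfiguration; the real
// template also emits InitConfiguration, KubeletConfiguration, and so on.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
controlPlaneEndpoint: control-plane.minikube.internal:{{.BindPort}}
kubernetesVersion: {{.KubernetesVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceSubnet}}
`

func main() {
	p := clusterParams{
		BindPort:          8444,
		ClusterName:       "mk",
		KubernetesVersion: "v1.28.4",
		PodSubnet:         "10.244.0.0/16",
		ServiceSubnet:     "10.96.0.0/12",
	}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, p); err != nil {
		panic(err)
	}
}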
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
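The test/ln -fs steps above install each CA under /etc/ssl/certs using its OpenSSL subject hash (for example b5213941.0 for minikubeCA), which is how OpenSSL-based clients locate trusted certificates. A minimal sketch of the same idea in Go, shelling out to openssl exactly as the log does; the paths in main are placeholders and the symlink step needs root:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCertByHash mirrors the "openssl x509 -hash" + "ln -fs" pattern in the
// log: compute the subject hash of a PEM certificate and symlink it into the
// trust directory as <hash>.0.
func linkCertByHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	// Replace any stale link, matching the "ln -fs" semantics.
	_ = os.Remove(link)
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCertByHash("/usr/share/ca-certificates/minikubeCA.pem", "/etc/ssl/certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}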
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
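The `openssl x509 -checkend 86400` runs above exit non-zero when a certificate expires within the next 24 hours, which is how the restart path decides whether the existing control-plane certs can be reused. The same check expressed with the Go standard library, as a sketch (the certificate path is a placeholder):

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in a PEM file expires
// within the given window (the Go analogue of "openssl x509 -checkend").
func expiresWithin(pemPath string, window time.Duration) (bool, error) {
	data, err := os.ReadFile(pemPath)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(window).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	fmt.Println("expires within 24h:", soon)
}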
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
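On this restart path the cluster is reconfigured by running individual `kubeadm init phase` subcommands (certs, kubeconfig, kubelet-start, control-plane, etcd) against the regenerated /var/tmp/minikube/kubeadm.yaml rather than a full `kubeadm init`. A rough Go sketch of driving that phase sequence; the binary path, config path, and phase list are taken from the log lines above, everything else is illustrative:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

func main() {
	kubeadm := "/var/lib/minikube/binaries/v1.28.4/kubeadm"
	cfg := "/var/tmp/minikube/kubeadm.yaml"
	phases := [][]string{
		{"init", "phase", "certs", "all"},
		{"init", "phase", "kubeconfig", "all"},
		{"init", "phase", "kubelet-start"},
		{"init", "phase", "control-plane", "all"},
		{"init", "phase", "etcd", "local"},
	}
	for _, ph := range phases {
		args := append(ph, "--config", cfg)
		cmd := exec.Command(kubeadm, args...)
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		if err := cmd.Run(); err != nil {
			fmt.Fprintf(os.Stderr, "phase %v failed: %v\n", ph, err)
			os.Exit(1)
		}
	}
}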
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
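The health wait above polls https://192.168.61.88:8444/healthz roughly every half second, treating connection refused, 403 (anonymous access before the RBAC bootstrap roles exist) and 500 (failed poststart hooks) as "not ready yet" and finishing once it sees a 200. A standard-library Go sketch of that loop; the InsecureSkipVerify and the timeout are illustrative assumptions, since the apiserver's cert is not yet in the client's trust store at this point:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls an apiserver /healthz endpoint until it returns 200 or
// the deadline passes. Transport errors and non-200 responses are treated as
// "keep waiting", matching the behaviour visible in the log above.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver at %s not healthy after %s", url, timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.61.88:8444/healthz", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}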
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
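The pod_ready checks above gate on the Ready condition in each pod's status, and skip pods whose hosting node is itself not yet "Ready". A simplified sketch of that condition check against a pod fetched with `kubectl get pod -o json`; the struct below is a hand-rolled subset for illustration, not the real client-go types, and the pod name in main is just the one from the log:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// podStatus is a deliberately minimal subset of the Pod schema, just enough
// to read the Ready condition from `kubectl get pod -o json` output.
type podStatus struct {
	Status struct {
		Conditions []struct {
			Type   string `json:"type"`
			Status string `json:"status"`
		} `json:"conditions"`
	} `json:"status"`
}

// isPodReady reports whether the named pod has condition Ready=True.
func isPodReady(namespace, name string) (bool, error) {
	out, err := exec.Command("kubectl", "get", "pod", "-n", namespace, name, "-o", "json").Output()
	if err != nil {
		return false, err
	}
	var p podStatus
	if err := json.Unmarshal(out, &p); err != nil {
		return false, err
	}
	for _, c := range p.Status.Conditions {
		if c.Type == "Ready" {
			return c.Status == "True", nil
		}
	}
	return false, nil
}

func main() {
	ready, err := isPodReady("kube-system", "kube-scheduler-default-k8s-diff-port-440341")
	fmt.Println(ready, err)
}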
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
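The rendered kubeadm configuration above (InitConfiguration, ClusterConfiguration, KubeletConfiguration and KubeProxyConfiguration in one multi-document file) is what gets written to /var/tmp/minikube/kubeadm.yaml.new a few lines below. As an illustrative aside only, a file like this can be sanity-checked offline with the matching kubeadm binary; kubeadm v1.26 and later (including the v1.29.0-rc.2 used here) ship a validate subcommand that should accept the full multi-document file:

	  # illustrative check only, not part of this test run; paths follow the log above
	  sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" \
	    kubeadm config validate --config /var/tmp/minikube/kubeadm.yaml.new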
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
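The openssl -checkend 86400 runs above exit with status 0 when the certificate is still valid for at least the next 86400 seconds (24 hours) and 1 when it is not; that exit status is what tells the restart path whether the existing control-plane certificates can be reused. A standalone sketch of the same check, using one of the paths from this log:

	  # illustrative: exit status of -checkend drives the reuse/regenerate decision
	  sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
	    && echo "valid for at least 24h" \
	    || echo "expires within 24h"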
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
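The healthz loop above treats "connection refused", HTTP 403 from the anonymous probe and HTTP 500 while post-start hooks are still pending as "not ready yet", and keeps polling until the endpoint returns 200. A minimal shell sketch of the same probe (illustrative only, not minikube's actual api_server.go), using the address from this run:

	  # poll /healthz until the apiserver answers 200; -k skips TLS verification
	  until [ "$(curl -sk -o /dev/null -w '%{http_code}' https://192.168.39.148:8443/healthz)" = "200" ]; do
	    sleep 0.5
	  done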
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
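
The block above is one pass of minikube's control-plane diagnostic loop: each expected component (kube-apiserver, etcd, coredns, kube-scheduler, kube-proxy, kube-controller-manager, kindnet, kubernetes-dashboard) is probed with crictl, and because no containers exist yet, the fallback log sources (kubelet, dmesg, describe nodes, CRI-O, container status) are gathered instead. A minimal sketch of the same probe, runnable on the node by hand, assuming crictl is installed and talking to the default CRI socket (this is an illustration of the commands in the log, not minikube's actual Go code):

    for name in kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard; do
      # Mirrors the "crictl ps -a --quiet --name=..." calls shown in the log above.
      ids=$(sudo crictl ps -a --quiet --name="$name")
      if [ -z "$ids" ]; then
        echo "No container was found matching \"$name\""
      else
        echo "$name: $ids"
      fi
    done

An empty result for every component, together with the "connection to the server localhost:8443 was refused" from kubectl, is what keeps this cycle repeating until the restart attempt times out further down.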
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
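
At this point the 992056 run gives up on restarting the existing control plane: metrics-server-57f55c9bc5-w8cj6 never reported Ready within the 4m0s window, so minikube falls back to a full "kubeadm reset" followed by a fresh init. An equivalent readiness check expressed with plain kubectl, rather than minikube's internal pod_ready poller, would look like this (a sketch, assuming kubeconfig access to that cluster):

    # Wait up to 4 minutes for the pod named in the log to report the Ready condition.
    kubectl -n kube-system wait pod/metrics-server-57f55c9bc5-w8cj6 \
      --for=condition=Ready --timeout=4m0s \
      || echo "pod never became Ready; minikube resets the control plane as logged above"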
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
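[Editor's note] The "waiting for k8s-apps to be running" step above polls the kube-system pods until each one reports Running. For reference, a minimal client-go sketch of that kind of wait (a hypothetical illustration under the assumption of a local kubeconfig path, not minikube's actual implementation):

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitForKubeSystemRunning polls the kube-system namespace until every pod
// reports phase Running, mirroring the system_pods wait logged above.
// This is a sketch for illustration only; minikube's own logic also tolerates
// Pending addon pods such as metrics-server and storage-provisioner.
func waitForKubeSystemRunning(ctx context.Context, kubeconfig string) error {
	cfg, err := clientcmd.BuildConfigFromFlags("", kubeconfig)
	if err != nil {
		return err
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		return err
	}
	ticker := time.NewTicker(2 * time.Second)
	defer ticker.Stop()
	for {
		pods, err := client.CoreV1().Pods("kube-system").List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		notRunning := 0
		for _, p := range pods.Items {
			if p.Status.Phase != corev1.PodRunning {
				notRunning++
			}
		}
		if notRunning == 0 {
			fmt.Printf("all %d kube-system pods running\n", len(pods.Items))
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-ticker.C:
		}
	}
}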
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
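[Editor's note] The grep/rm sequence above checks each existing /etc/kubernetes/*.conf for the expected control-plane endpoint (https://control-plane.minikube.internal:8444) and removes any file that does not contain it, so the subsequent kubeadm init can regenerate clean kubeconfigs. A minimal local sketch of that check, assuming direct filesystem access for illustration (minikube actually runs the equivalent shell commands over SSH on the node):

package main

import (
	"fmt"
	"os"
	"strings"
)

// cleanStaleKubeconfigs removes any kubeconfig that is missing or does not
// reference the expected API server endpoint, so kubeadm rewrites it.
func cleanStaleKubeconfigs(endpoint string) {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing file or stale endpoint: drop it and let kubeadm regenerate it.
			_ = os.Remove(f)
			fmt.Printf("removed stale config %s\n", f)
		}
	}
}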
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
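[Editor's note] The "Gathering logs for ..." block above follows a fixed pattern: list container IDs for each component with crictl, then tail the last 400 log lines of each ID. A hypothetical sketch of that pattern using os/exec and the same crictl flags shown in the log (not minikube's actual code):

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

// gatherComponentLogs lists CRI containers matching a component name and
// prints the tail of each container's logs, mirroring the pattern above.
func gatherComponentLogs(name string) error {
	// Equivalent of: sudo crictl ps -a --quiet --name=<name>
	out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
	if err != nil {
		return err
	}
	ids := strings.Fields(string(out))
	if len(ids) == 0 {
		fmt.Printf("no container was found matching %q\n", name)
		return nil
	}
	for _, id := range ids {
		// Equivalent of: sudo crictl logs --tail 400 <id>
		logs, err := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
		if err != nil {
			return err
		}
		fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
	}
	return nil
}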
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.545803804Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710444782545768464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=87ddc3a5-9344-400f-931b-b73b6f3e9537 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.546455023Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=dde67c82-d1ff-496c-b2c6-3b41d5ef1146 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.546533066Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=dde67c82-d1ff-496c-b2c6-3b41d5ef1146 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.546571884Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=dde67c82-d1ff-496c-b2c6-3b41d5ef1146 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.582297117Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d411fb77-234d-48b3-bd6d-e8eaa3d71bb9 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.582391724Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d411fb77-234d-48b3-bd6d-e8eaa3d71bb9 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.589347702Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6bab0366-e155-44fa-817d-1e1a6c264255 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.589811682Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710444782589778748,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6bab0366-e155-44fa-817d-1e1a6c264255 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.590667150Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60da09db-4a1c-42fb-9bcd-acb64b440464 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.590730371Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60da09db-4a1c-42fb-9bcd-acb64b440464 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.590785843Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=60da09db-4a1c-42fb-9bcd-acb64b440464 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.628142865Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=85523431-5887-4486-a698-ea847c5ebe00 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.628239793Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=85523431-5887-4486-a698-ea847c5ebe00 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.629783918Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9177acb2-f139-493b-ac41-ab05aa73683b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.630314819Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710444782630229463,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9177acb2-f139-493b-ac41-ab05aa73683b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.630868340Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fcbf1c3-7bd1-408d-963d-be7325fd3444 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.630944096Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fcbf1c3-7bd1-408d-963d-be7325fd3444 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.630978485Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=0fcbf1c3-7bd1-408d-963d-be7325fd3444 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.665383885Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=36234576-5276-4337-ba4a-cd839d061389 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.665482214Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=36234576-5276-4337-ba4a-cd839d061389 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.666935906Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=b992ffdf-15c7-464c-bbdb-898c5037fc74 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.667451857Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710444782667428472,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=b992ffdf-15c7-464c-bbdb-898c5037fc74 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.668150128Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=cb8e1232-c54b-4759-9eec-8bf75de85a9b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.668253176Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=cb8e1232-c54b-4759-9eec-8bf75de85a9b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:33:02 old-k8s-version-968094 crio[643]: time="2024-03-14 19:33:02.668310484Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=cb8e1232-c54b-4759-9eec-8bf75de85a9b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.383233] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.688071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.628591] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.061195] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075909] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.206378] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.171886] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.322045] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.118781] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.064097] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 19:25] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +8.646452] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 19:29] systemd-fstab-generator[4997]: Ignoring "noauto" option for root device
	[Mar14 19:31] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.079625] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:33:02 up 8 min,  0 users,  load average: 0.07, 0.12, 0.07
	Linux old-k8s-version-968094 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000c4bf58, 0x4f0ac20, 0xc000b79ef0, 0x1, 0xc0001000c0)
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:167 +0x149
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc00024e2a0, 0xc0001000c0)
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: created by k8s.io/kubernetes/pkg/kubelet.NewMainKubelet
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/kubelet.go:439 +0x6849
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: goroutine 136 [select]:
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*controlBuffer).get(0xc00093e4b0, 0x1, 0x0, 0x0, 0x0, 0x0)
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:395 +0x125
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.(*loopyWriter).run(0xc00028ad80, 0x0, 0x0)
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/controlbuf.go:513 +0x1d3
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client.func3(0xc0000f1880)
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:346 +0x7b
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]: created by k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport.newHTTP2Client
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5454]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/google.golang.org/grpc/internal/transport/http2_client.go:344 +0xefc
	Mar 14 19:33:00 old-k8s-version-968094 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 14 19:33:00 old-k8s-version-968094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 14 19:33:00 old-k8s-version-968094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 20.
	Mar 14 19:33:00 old-k8s-version-968094 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 19:33:00 old-k8s-version-968094 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5512]: I0314 19:33:00.782120    5512 server.go:416] Version: v1.20.0
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5512]: I0314 19:33:00.782444    5512 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5512]: I0314 19:33:00.784662    5512 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5512]: W0314 19:33:00.786408    5512 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 14 19:33:00 old-k8s-version-968094 kubelet[5512]: I0314 19:33:00.786736    5512 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (289.52799ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-968094" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/SecondStart (750.57s)
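The suggestion minikube prints above (pass --extra-config=kubelet.cgroup-driver=systemd and check 'journalctl -xeu kubelet') points at a cgroup-driver mismatch between the kubelet and CRI-O as the likely reason the kubelet never comes up. A minimal diagnostic sketch, assuming SSH access to the node via the profile name shown in the log; these are generic checks, not commands issued by this test harness:

	# cgroup manager CRI-O is configured with (key is cgroup_manager in /etc/crio)
	minikube -p old-k8s-version-968094 ssh "sudo grep -ri cgroup_manager /etc/crio/"
	# cgroup driver the kubelet was told to use by kubeadm
	minikube -p old-k8s-version-968094 ssh "sudo grep -i cgroupDriver /var/lib/kubelet/config.yaml"
	# why the kubelet keeps crash-looping (restart counter was at 20 above)
	minikube -p old-k8s-version-968094 ssh "sudo journalctl -xeu kubelet | tail -n 50"

If the two drivers disagree, restarting the profile with the flag the log itself recommends (minikube start -p old-k8s-version-968094 --extra-config=kubelet.cgroup-driver=systemd) is the first thing to try.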

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341: exit status 3 (3.167339299s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:20:57.368569  992453 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host
	E0314 19:20:57.368595  992453 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host

                                                
                                                
** /stderr **
start_stop_delete_test.go:239: status error: exit status 3 (may be ok)
start_stop_delete_test.go:241: expected post-stop host status to be -"Stopped"- but got *"Error"*
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-440341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
start_stop_delete_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-440341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4: exit status 11 (6.161429519s)

                                                
                                                
-- stdout --
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_ADDON_ENABLE_PAUSED: enabled failed: check paused: list paused: crictl list: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_addons_a2d68fa011bbbda55500e636dff79fec124b29e3_0.log                  │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
start_stop_delete_test.go:248: failed to enable an addon post-stop. args "out/minikube-linux-amd64 addons enable dashboard -p default-k8s-diff-port-440341 --images=MetricsScraper=registry.k8s.io/echoserver:1.4": exit status 11
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341: exit status 3 (3.05439396s)

                                                
                                                
-- stdout --
	Error

                                                
                                                
-- /stdout --
** stderr ** 
	E0314 19:21:06.584550  992523 status.go:376] failed to get storage capacity of /var: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host
	E0314 19:21:06.584570  992523 status.go:249] status error: NewSession: new client: new client: dial tcp 192.168.61.88:22: connect: no route to host

                                                
                                                
** /stderr **
helpers_test.go:239: status error: exit status 3 (may be ok)
helpers_test.go:241: "default-k8s-diff-port-440341" host is not running, skipping log retrieval (state="Error")
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonAfterStop (12.38s)
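Both status checks in this test fail with 'dial tcp 192.168.61.88:22: connect: no route to host', so the harness never gets past SSH: the VM is unreachable rather than cleanly stopped, which is why the host state reports "Error" instead of "Stopped" and the addon enable exits with MK_ADDON_ENABLE_PAUSED. A quick way to see what the hypervisor thinks of the machine, sketched under the assumption that the kvm2 driver named the libvirt domain after the profile (it normally does):

	# list all domains the kvm2 driver created and their states
	virsh -c qemu:///system list --all
	# state of just this profile's VM (running / shut off / paused)
	virsh -c qemu:///system domstate default-k8s-diff-port-440341
	# re-check what minikube itself reports once the domain state is known
	out/minikube-linux-amd64 status -p default-k8s-diff-port-440341

If the domain is still running but port 22 is unreachable, the problem is networking inside the guest (or on the host), not the stop sequence itself.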

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.36s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:274: ***** TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992669 -n embed-certs-992669
start_stop_delete_test.go:274: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:38:55.757680118 +0000 UTC m=+5662.243539008
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
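The wait that times out here is for the pods created by the 'dashboard' addon. The equivalent check done by hand, assuming the kubeconfig context carries the profile name as in the other tests in this report:

	# wait up to 9 minutes for the dashboard pods to become Ready
	kubectl --context embed-certs-992669 -n kubernetes-dashboard \
	  wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s
	# if that times out, look at why the pods are stuck
	kubectl --context embed-certs-992669 -n kubernetes-dashboard get pods -o wide
	kubectl --context embed-certs-992669 -n kubernetes-dashboard describe pods -l k8s-app=kubernetes-dashboard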
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-992669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-992669 logs -n 25: (2.212829068s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
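The sed edits above amount to a small CRI-O drop-in: pin the pause image, switch the cgroup manager to cgroupfs, and keep conmon in the pod cgroup. As a hedged illustration only (the section layout is an assumption here, not something the log shows), the touched parts of /etc/crio/crio.conf.d/02-crio.conf end up roughly like this:

    [crio.image]
    # pause image pinned so kubeadm and CRI-O agree on the sandbox image
    pause_image = "registry.k8s.io/pause:3.9"

    [crio.runtime]
    # cgroupfs rather than systemd, matching the kubelet's cgroupDriver further down
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"

The systemctl restart of crio above is what makes these settings take effect before kubeadm starts using the runtime.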
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
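The grep -v / echo / cp pipeline above is an idempotent way to pin host.minikube.internal in the guest's /etc/hosts: drop any stale entry, then append a fresh one. A minimal Go sketch of the same idea (upsertHostsEntry is a made-up name for this sketch; it assumes direct file access, whereas minikube runs the shell pipeline over SSH with sudo):

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // upsertHostsEntry removes any existing line for host and appends a fresh "ip<TAB>host" entry,
    // mirroring the grep -v / echo / cp pipeline from the log.
    func upsertHostsEntry(path, ip, host string) error {
        data, err := os.ReadFile(path)
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if strings.HasSuffix(line, "\t"+host) {
                continue // stale entry; re-added below
            }
            kept = append(kept, line)
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
        return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
    }

    func main() {
        // Values taken from the log; writing /etc/hosts needs root on the guest.
        if err := upsertHostsEntry("/etc/hosts", "192.168.50.1", "host.minikube.internal"); err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
    }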
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
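Interleaved with the embed-certs run, the retry.go lines above show libmachine polling libvirt for the old-k8s-version machine's DHCP lease, sleeping a little longer each time. A minimal sketch of that retry-with-growing-delay pattern, with a made-up lookupIP helper and backoff curve (the real retry.go uses its own jittered backoff):

    package main

    import (
        "errors"
        "fmt"
        "math/rand"
        "time"
    )

    // lookupIP stands in for asking libvirt for the domain's current DHCP lease.
    func lookupIP() (string, error) {
        return "", errors.New("unable to find current IP address")
    }

    // waitForIP polls with a growing, jittered delay until an address appears or the timeout expires.
    func waitForIP(timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 250 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookupIP(); err == nil {
                return ip, nil
            }
            sleep := delay + time.Duration(rand.Int63n(int64(delay/2))) // add some jitter
            fmt.Printf("will retry after %v: waiting for machine to come up\n", sleep)
            time.Sleep(sleep)
            if delay < 5*time.Second {
                delay = delay * 3 / 2 // grow the base delay, capped well below the timeout
            }
        }
        return "", fmt.Errorf("timed out after %v waiting for an IP", timeout)
    }

    func main() {
        if _, err := waitForIP(10 * time.Second); err != nil {
            fmt.Println(err)
        }
    }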
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
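Each openssl x509 -checkend 86400 run above asks whether a certificate is still valid for at least another 24 hours; a non-zero exit would trigger regeneration. The same check expressed as a minimal Go sketch (the path is just one of the certs from the log; any PEM-encoded certificate works):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "os"
        "time"
    )

    func main() {
        // One of the certificates checked in the log above.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            fmt.Fprintln(os.Stderr, "no PEM block found")
            os.Exit(1)
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            fmt.Fprintln(os.Stderr, err)
            os.Exit(1)
        }
        // Equivalent of "-checkend 86400": fail if the cert expires within the next 24h.
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
            os.Exit(1)
        }
        fmt.Println("certificate still valid for at least 24h")
    }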
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
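The healthz sequence above is typical for a restart: first connection refused while the apiserver static pod comes up, then 403 because the probe is treated as system:anonymous, then 500 while the rbac and priority-class bootstrap hooks finish, and finally 200. A minimal Go sketch of that kind of poll; skipping TLS verification and the bare anonymous GET are simplifications for illustration, not a claim about what api_server.go does internally:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint taken from the log; the apiserver certificate is self-signed, hence InsecureSkipVerify.
        url := "https://192.168.50.213:8443/healthz"
        client := &http.Client{
            Timeout:   2 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(60 * time.Second)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err != nil {
                // connection refused while the apiserver static pod is still starting
                time.Sleep(500 * time.Millisecond)
                continue
            }
            body, _ := io.ReadAll(resp.Body)
            resp.Body.Close()
            if resp.StatusCode == http.StatusOK {
                fmt.Println("apiserver healthy:", string(body))
                return
            }
            // 403 (anonymous) or 500 (bootstrap hooks still running): keep polling
            time.Sleep(500 * time.Millisecond)
        }
        fmt.Println("timed out waiting for /healthz")
    }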
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
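Note: the provisioning step logged just above writes the insecure-registry flag into an environment file on the guest; on the minikube ISO the crio.service unit is expected to pick that file up via an EnvironmentFile= directive (an assumption about the unit wiring, not shown in this log). A minimal manual equivalent of the step:

    # sketch of the command logged above; assumes crio.service sources /etc/sysconfig/crio.minikube
    sudo mkdir -p /etc/sysconfig
    printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
    sudo systemctl restart crio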
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
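The guest-clock check above runs date +%s.%N on the machine over SSH and compares the result with the local clock; a standalone sketch of that comparison (the ssh target and the 2s tolerance are illustrative, not values taken from this run):

    # compare guest wall-clock time with the local clock and report the drift
    GUEST=$(ssh docker@192.168.61.88 'date +%s.%N')
    HOST=$(date +%s.%N)
    awk -v g="$GUEST" -v h="$HOST" 'BEGIN {
      d = g - h; if (d < 0) d = -d
      printf "delta=%.6fs\n", d
      if (d > 2.0) exit 1        # illustrative 2s tolerance
      exit 0
    }'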
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
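After the tee and sed commands above, the relevant runtime configuration on the guest should look roughly like this (a sketch of the intended end state assembled from the logged commands, not a dump taken from the run):

    # /etc/crictl.yaml
    runtime-endpoint: unix:///var/run/crio/crio.sock

    # keys enforced in /etc/crio/crio.conf.d/02-crio.conf
    pause_image = "registry.k8s.io/pause:3.9"
    cgroup_manager = "cgroupfs"
    conmon_cgroup = "pod"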
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
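The sysctl failure above is expected while the br_netfilter module is not loaded: /proc/sys/net/bridge/* only exists once the module is in, which is why the runner falls back to modprobe before enabling forwarding. The equivalent manual sequence:

    sudo sysctl net.bridge.bridge-nf-call-iptables   # fails while br_netfilter is not loaded
    sudo modprobe br_netfilter                       # creates /proc/sys/net/bridge/*
    sudo sysctl net.bridge.bridge-nf-call-iptables   # now resolves
    echo 1 | sudo tee /proc/sys/net/ipv4/ip_forward  # enable IPv4 forwarding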
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
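The one-liner above is an idempotent way to rewrite a single /etc/hosts entry: drop any existing line for the name, append the desired mapping, and copy the result back over the original file. The same command, expanded for readability (temp-file name kept as in the log):

    {
      grep -v $'\thost.minikube.internal$' /etc/hosts    # strip any stale entry
      echo "192.168.61.1	host.minikube.internal"        # append the current mapping
    } > /tmp/h.$$
    sudo cp /tmp/h.$$ /etc/hosts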
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
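The *.0 symlinks created above follow OpenSSL's c_rehash convention: openssl x509 -hash prints the subject-name hash that TLS libraries use to look a CA up inside /etc/ssl/certs. For example, for the minikube CA installed earlier (hash value as seen in this log):

    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
    # prints b5213941, so the CA is exposed as /etc/ssl/certs/b5213941.0
    sudo ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0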
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
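Each openssl x509 -checkend 86400 run above asks whether the certificate will still be valid 86400 seconds (24 hours) from now; openssl exits 0 if so and non-zero if the cert would expire inside that window, so a non-zero status is what would trigger regeneration. In isolation:

    if openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400; then
      echo "cert valid for at least another 24h"
    else
      echo "cert expires within 24h"
    fi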
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:
26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
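
	The /etc/hosts pipeline above strips any existing host.minikube.internal entry and appends a fresh one for the gateway IP. A rough Go equivalent is sketched below; the 192.168.39.1 address and hostname come from the log, while writing the file back directly (instead of staging in /tmp/h.$$ and copying with sudo) is a simplification.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    // setHostsEntry removes any line ending in "\t<name>" from /etc/hosts and
    // appends "<ip>\t<name>", mirroring the grep -v / echo pipeline in the log.
    func setHostsEntry(ip, name string) error {
        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            return err
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            if !strings.HasSuffix(line, "\t"+name) {
                kept = append(kept, line)
            }
        }
        kept = append(kept, fmt.Sprintf("%s\t%s", ip, name))
        return os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0o644)
    }

    func main() {
        if err := setHostsEntry("192.168.39.1", "host.minikube.internal"); err != nil {
            panic(err)
        }
    }
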
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
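
	Each "daemon lookup ... No such image" line above means the image was not found in a local container daemon, so it has to come from the on-disk cache instead (the "needs transfer" and "Loading image from" lines further down). The sketch below shows that decision in simplified form by shelling out to podman; the cache layout and binary name are assumptions for illustration, not minikube's image.go.

    package main

    import (
        "fmt"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // locateImage reports whether an image is already present in the local
    // container runtime; if not, it returns the cached tarball that would be
    // transferred and loaded instead.
    func locateImage(image, cacheDir string) (cachedTar string, present bool) {
        // `podman image inspect` exits non-zero when the image is missing.
        if err := exec.Command("sudo", "podman", "image", "inspect", image).Run(); err == nil {
            return "", true
        }
        // e.g. registry.k8s.io/etcd:3.5.10-0 -> <cacheDir>/registry.k8s.io/etcd_3.5.10-0
        name := strings.ReplaceAll(image, ":", "_")
        return filepath.Join(cacheDir, name), false
    }

    func main() {
        tar, ok := locateImage("registry.k8s.io/etcd:3.5.10-0", "/home/jenkins/.minikube/cache/images/amd64")
        if !ok {
            fmt.Println("needs transfer from", tar)
        }
    }
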
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
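
	The stat -c "%s %y" calls above check the size and modification time of each cached tarball already present on the VM; when the result matches the local copy, the transfer is skipped ("copy: skipping ... (exists)"). A size-only version of that check is sketched below, assuming the remote output keeps the "<size> <mtime>" shape shown above; minikube's real comparison may be stricter, and the sample values in main are invented.

    package main

    import (
        "fmt"
        "os"
        "strconv"
        "strings"
    )

    // needsCopy decides whether a cached image tarball has to be re-copied,
    // given the local file and the remote `stat -c "%s %y" <path>` output.
    // It only compares sizes; the real code may also look at timestamps.
    func needsCopy(localPath, remoteStat string) (bool, error) {
        info, err := os.Stat(localPath)
        if err != nil {
            return false, err
        }
        fields := strings.Fields(remoteStat)
        if len(fields) == 0 {
            return true, nil // remote file missing: copy it
        }
        remoteSize, err := strconv.ParseInt(fields[0], 10, 64)
        if err != nil {
            return true, nil
        }
        return remoteSize != info.Size(), nil
    }

    func main() {
        copyIt, err := needsCopy("/tmp/etcd_3.5.10-0", "150779904 2024-03-14 19:20:01.000000000 +0000")
        fmt.Println(copyIt, err)
    }
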
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
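
	The kubelet systemd drop-in above is rendered from the node's name, IP, and Kubernetes version. Below is a hedged text/template sketch that produces the same ExecStart block; only the substituted values come from the log, the template wiring itself is illustrative.

    package main

    import (
        "os"
        "text/template"
    )

    // kubeletUnit renders a systemd drop-in like the one logged above.
    var kubeletUnit = template.Must(template.New("kubelet").Parse(`[Unit]
    Wants=crio.service

    [Service]
    ExecStart=
    ExecStart=/var/lib/minikube/binaries/{{.KubernetesVersion}}/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override={{.NodeName}} --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip={{.NodeIP}}

    [Install]
    `))

    func main() {
        _ = kubeletUnit.Execute(os.Stdout, struct {
            KubernetesVersion, NodeName, NodeIP string
        }{"v1.29.0-rc.2", "no-preload-731976", "192.168.39.148"})
    }
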
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
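
	The openssl x509 -hash / ln -fs pairs above install each CA certificate into the system trust store under its subject-hash name (e.g. b5213941.0), which is how OpenSSL finds CAs in /etc/ssl/certs. A small exec-based sketch of that step follows; it uses the same commands as the log, and the ".0" suffix assumes no hash collision.

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    // installCA symlinks certPath into /etc/ssl/certs under its OpenSSL
    // subject hash, e.g. /etc/ssl/certs/b5213941.0 -> minikubeCA.pem.
    func installCA(certPath string) error {
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            return err
        }
        hash := strings.TrimSpace(string(out))
        link := filepath.Join("/etc/ssl/certs", hash+".0") // ".0" assumes no collision
        _ = os.Remove(link)                                // ln -fs semantics
        return os.Symlink(certPath, link)
    }

    func main() {
        fmt.Println(installCA("/usr/share/ca-certificates/minikubeCA.pem"))
    }
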
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
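
	Each -checkend 86400 run above asks OpenSSL whether the certificate expires within the next 24 hours: exit status 0 means it stays valid for that window, non-zero means it expires (or the file could not be read). A short wrapper around that exit-status convention:

    package main

    import (
        "fmt"
        "os/exec"
    )

    // expiresWithin reports whether the certificate at path expires in the next
    // `seconds` seconds, using openssl's exit status as in the log above.
    func expiresWithin(path string, seconds int) bool {
        cmd := exec.Command("openssl", "x509", "-noout", "-in", path,
            "-checkend", fmt.Sprint(seconds))
        return cmd.Run() != nil // non-zero exit: expiring (or not readable)
    }

    func main() {
        fmt.Println(expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 86400))
    }
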
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
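	Interleaved with those diagnostic passes are readiness polls from the other test profiles (processes 991880, 992563, 992056): the no-preload-731976 control-plane pods do flip to Ready, but the metrics-server pods (metrics-server-57f55c9bc5-t2hhv, -w8cj6, -rhg5r) report "Ready":"False" for the whole excerpt, which is what eventually times out the UserAppExistsAfterStop/AddonExistsAfterStop tests listed at the top of the report. A short sketch of checking the same condition directly with kubectl; the context name is a placeholder, and the pod name is taken from the log lines above:
	
	  # Sketch only: inspect one of the metrics-server pods that never becomes Ready.
	  # <profile> is a placeholder context name; this excerpt does not show it.
	  kubectl --context <profile> -n kube-system get pod metrics-server-57f55c9bc5-w8cj6 -o wide
	  kubectl --context <profile> -n kube-system describe pod metrics-server-57f55c9bc5-w8cj6
	  # Roughly equivalent to the 6m0s wait performed by pod_ready.go:
	  kubectl --context <profile> -n kube-system wait --for=condition=Ready pod/metrics-server-57f55c9bc5-w8cj6 --timeout=6m
	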
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
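
The burst of "kubectl get sa default" calls above is minikube polling until the default service account exists, which it treats as the signal that kube-system privileges have been elevated (the elevateKubeSystemPrivileges step). Below is a minimal sketch of that kind of polling loop, assuming a local kubectl at the path seen in the log and a 2-minute timeout; it is illustrative, not minikube's actual implementation.

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForDefaultServiceAccount polls "kubectl get sa default" until it
// succeeds or the timeout expires. The kubectl path, kubeconfig path and
// timeout are assumptions chosen for this sketch.
func waitForDefaultServiceAccount(kubectlPath, kubeconfig string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		cmd := exec.Command(kubectlPath, "get", "sa", "default", "--kubeconfig", kubeconfig)
		if err := cmd.Run(); err == nil {
			return nil // the default service account exists
		}
		time.Sleep(500 * time.Millisecond) // roughly the 0.5s cadence visible in the log
	}
	return fmt.Errorf("default service account not found within %s", timeout)
}

func main() {
	if err := waitForDefaultServiceAccount("/var/lib/minikube/binaries/v1.28.4/kubectl",
		"/var/lib/minikube/kubeconfig", 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}
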
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
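
The "new ssh client" lines show minikube dialing the guest VM at 192.168.50.213:22 as user docker with the machine's id_rsa key before copying the addon manifests. A minimal sketch of building such a client with golang.org/x/crypto/ssh follows; it is not minikube's sshutil code, and the relaxed host-key handling is an assumption made to keep the example short.

package main

import (
	"fmt"
	"os"

	"golang.org/x/crypto/ssh"
)

// newSSHClient builds an SSH client from an IP, user and private-key path,
// matching the shape of the clients logged above. Illustrative only.
func newSSHClient(ip, user, keyPath string) (*ssh.Client, error) {
	key, err := os.ReadFile(keyPath)
	if err != nil {
		return nil, err
	}
	signer, err := ssh.ParsePrivateKey(key)
	if err != nil {
		return nil, err
	}
	cfg := &ssh.ClientConfig{
		User:            user,
		Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
		HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a throwaway test VM, not for production
	}
	return ssh.Dial("tcp", fmt.Sprintf("%s:22", ip), cfg)
}

func main() {
	c, err := newSSHClient("192.168.50.213", "docker",
		"/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa")
	if err != nil {
		fmt.Println(err)
		return
	}
	defer c.Close()
	fmt.Println("connected")
}
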
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
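
Each enabled addon above is copied into /etc/kubernetes/addons on the guest and then applied in one kubectl invocation with repeated -f flags, as the apply commands in the log show. The helper below is a sketch of that pattern, using the binary and kubeconfig paths taken from the log; it is not minikube's addons code and drops the sudo wrapper seen in the logged commands.

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// applyAddonManifests runs a single "kubectl apply" over every manifest path,
// mirroring the batched apply visible in the log above.
func applyAddonManifests(manifests []string) error {
	args := []string{"apply"}
	for _, m := range manifests {
		args = append(args, "-f", m)
	}
	cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
	cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
	out, err := cmd.CombinedOutput()
	if err != nil {
		return fmt.Errorf("kubectl apply failed: %v\n%s", err, out)
	}
	return nil
}

func main() {
	err := applyAddonManifests([]string{
		"/etc/kubernetes/addons/metrics-apiservice.yaml",
		"/etc/kubernetes/addons/metrics-server-deployment.yaml",
		"/etc/kubernetes/addons/metrics-server-rbac.yaml",
		"/etc/kubernetes/addons/metrics-server-service.yaml",
	})
	if err != nil {
		fmt.Println(err)
	}
}
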
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
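
The healthz wait above issues an HTTPS GET against https://192.168.50.213:8443/healthz and treats a 200 response with body "ok" as a healthy apiserver. A short sketch of that probe follows; skipping TLS verification is an assumption made for brevity, whereas a real client would trust the cluster CA.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// checkAPIServerHealthz performs the healthz probe seen in the log:
// a 200 with body "ok" means the control plane is serving.
func checkAPIServerHealthz(url string) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	resp, err := client.Get(url)
	if err != nil {
		return err
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	if resp.StatusCode != http.StatusOK || string(body) != "ok" {
		return fmt.Errorf("apiserver not healthy: %d %q", resp.StatusCode, body)
	}
	return nil
}

func main() {
	fmt.Println(checkAPIServerHealthz("https://192.168.50.213:8443/healthz"))
}
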
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
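
The cleanup above greps each existing /etc/kubernetes/*.conf for the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and deletes any file that does not reference it, so the following kubeadm init regenerates fresh configs. The sketch below does the equivalent check locally; minikube performs the same grep/rm over SSH, so this is a simplified illustration rather than its implementation.

package main

import (
	"fmt"
	"os"
	"strings"
)

// pruneStaleKubeconfigs removes any config file that does not mention the
// expected control-plane endpoint, letting kubeadm rewrite it on init.
func pruneStaleKubeconfigs(endpoint string, paths []string) error {
	for _, p := range paths {
		data, err := os.ReadFile(p)
		if err != nil {
			if os.IsNotExist(err) {
				continue // nothing to prune; kubeadm will create the file
			}
			return err
		}
		if !strings.Contains(string(data), endpoint) {
			fmt.Printf("removing stale config %s\n", p)
			if err := os.Remove(p); err != nil {
				return err
			}
		}
	}
	return nil
}

func main() {
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	if err := pruneStaleKubeconfigs("https://control-plane.minikube.internal:8444", files); err != nil {
		fmt.Println(err)
	}
}
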
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
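
Once the metrics-server readiness wait times out, the harness collects diagnostics by discovering container IDs with crictl ps and tailing each container's logs with crictl logs --tail 400, alongside the kubelet and CRI-O journals shown above. A small sketch of that per-container log gathering follows; it is illustrative and not the harness's logs.go implementation.

package main

import (
	"fmt"
	"os/exec"
)

// gatherContainerLogs tails the last n lines of each container's logs via
// crictl, matching the pattern of the log-gathering commands above.
func gatherContainerLogs(ids []string, n int) map[string]string {
	out := make(map[string]string)
	for _, id := range ids {
		cmd := exec.Command("sudo", "crictl", "logs", "--tail", fmt.Sprint(n), id)
		b, err := cmd.CombinedOutput()
		if err != nil {
			out[id] = fmt.Sprintf("error: %v", err)
			continue
		}
		out[id] = string(b)
	}
	return out
}

func main() {
	// crictl accepts ID prefixes; this one is shortened from an ID in the log.
	logs := gatherContainerLogs([]string{"a09531e613ae"}, 400)
	for id, l := range logs {
		fmt.Printf("== %s ==\n%s\n", id, l)
	}
}
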
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
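
With the kvm2 driver and the crio runtime, minikube falls back to a plain bridge CNI, and the step announced above writes a bridge conflist into the guest's CNI config directory. The sketch below shows the general shape of such a config; the file name, bridge name, subnet and exact fields are assumptions for illustration, not the precise content minikube generates.

package main

import "os"

// A minimal bridge CNI conflist of the kind written during
// "Configuring bridge CNI". Values below are illustrative assumptions.
const bridgeConflist = `{
  "cniVersion": "0.4.0",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "isDefaultGateway": true,
      "ipMasq": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}`

func main() {
	// Writing into the CNI config directory normally requires root.
	_ = os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644)
}
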
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
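
	The 457-byte /etc/cni/net.d/1-k8s.conflist written here is not reproduced in the log. As a rough illustration of the shape a bridge-plugin conflist takes, the sketch below emits one; every field value is an assumption for illustration and not the file minikube actually deployed in this run:

	package main

	import (
		"encoding/json"
		"fmt"
	)

	func main() {
		// Illustrative bridge CNI config; plugin list, subnet and names are assumed.
		conf := map[string]any{
			"cniVersion": "0.3.1",
			"name":       "bridge",
			"plugins": []map[string]any{
				{
					"type":        "bridge",
					"bridge":      "bridge",
					"isGateway":   true,
					"ipMasq":      true,
					"hairpinMode": true,
					"ipam": map[string]any{
						"type":   "host-local",
						"subnet": "10.244.0.0/16",
					},
				},
				{"type": "portmap", "capabilities": map[string]bool{"portMappings": true}},
			},
		}
		out, _ := json.MarshalIndent(conf, "", "  ")
		fmt.Println(string(out))
	}
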
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
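
	The healthz wait above amounts to polling the apiserver's /healthz endpoint until it answers 200 "ok". A stripped-down Go poller of the same shape (illustrative only: minikube authenticates with the cluster's certificates, whereas this sketch simply skips TLS verification; the endpoint is the one shown in the log):

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	func main() {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// For illustration only: skip certificate verification instead of loading
			// the cluster CA and client certificates the way minikube does.
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		url := "https://192.168.39.148:8443/healthz" // endpoint taken from the log above
		deadline := time.Now().Add(4 * time.Minute)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
					return
				}
			}
			time.Sleep(2 * time.Second)
		}
		fmt.Println("apiserver never became healthy before the deadline")
	}
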
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
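
	The repeated "kubectl get sa default" runs above are a fixed-interval retry: the command is re-issued roughly every 500ms until the default service account exists, which is what the elevateKubeSystemPrivileges duration reflects. A generic Go retry helper of that shape (the kubectl path and kubeconfig in the example are assumptions for the sketch, mirroring the paths seen in the log):

	package main

	import (
		"fmt"
		"os/exec"
		"time"
	)

	// pollUntil runs fn every interval until it returns nil or the deadline passes.
	// It mirrors the ~500ms retry cadence visible in the log above.
	func pollUntil(interval, timeout time.Duration, fn func() error) error {
		deadline := time.Now().Add(timeout)
		for {
			err := fn()
			if err == nil {
				return nil
			}
			if time.Now().After(deadline) {
				return fmt.Errorf("timed out: last error: %w", err)
			}
			time.Sleep(interval)
		}
	}

	func main() {
		// Illustrative use: wait for the "default" service account to appear.
		err := pollUntil(500*time.Millisecond, 2*time.Minute, func() error {
			return exec.Command("kubectl", "--kubeconfig", "/var/lib/minikube/kubeconfig",
				"get", "sa", "default").Run()
		})
		fmt.Println("done, err =", err)
	}
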
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
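
	The addon manifests copied onto the node are applied in a single kubectl invocation with several -f flags and KUBECONFIG pointing at the node-local kubeconfig, exactly as the Run line above shows. An os/exec sketch of that same invocation (paths copied from the log; running this outside the VM is of course not meaningful):

	package main

	import (
		"fmt"
		"os"
		"os/exec"
	)

	func main() {
		manifests := []string{
			"/etc/kubernetes/addons/metrics-apiservice.yaml",
			"/etc/kubernetes/addons/metrics-server-deployment.yaml",
			"/etc/kubernetes/addons/metrics-server-rbac.yaml",
			"/etc/kubernetes/addons/metrics-server-service.yaml",
		}
		args := []string{"apply"}
		for _, m := range manifests {
			args = append(args, "-f", m)
		}
		cmd := exec.Command("/var/lib/minikube/binaries/v1.28.4/kubectl", args...)
		cmd.Env = append(os.Environ(), "KUBECONFIG=/var/lib/minikube/kubeconfig")
		cmd.Stdout, cmd.Stderr = os.Stdout, os.Stderr
		fmt.Println("running:", cmd.String())
		if err := cmd.Run(); err != nil {
			fmt.Println("kubectl apply failed:", err)
		}
	}
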
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
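
	Each pod_ready wait above checks the pod's "Ready" condition. With client-go that check reduces to roughly the following sketch (not minikube's actual helper; it assumes a reachable kubeconfig and a go.mod that pulls in k8s.io/client-go, and reuses a pod name and kubeconfig path seen earlier in this log):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// podReady reports whether the pod has condition Ready=True.
	func podReady(pod *corev1.Pod) bool {
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue
			}
		}
		return false
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18384-942544/kubeconfig")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.Background(),
			"kube-scheduler-default-k8s-diff-port-440341", metav1.GetOptions{})
		if err != nil {
			panic(err)
		}
		fmt.Println("ready:", podReady(pod))
	}
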
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
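
	The two "Done!" lines report the minor-version skew between the local kubectl and each cluster: 0 for kubectl 1.29.2 against cluster 1.29.0-rc.2, and 1 against cluster 1.28.4. A small sketch of that comparison (version strings taken from the log; parsing is simplified and only splits off pre-release suffixes):

	package main

	import (
		"fmt"
		"strconv"
		"strings"
	)

	// minor extracts the minor version number from strings like "1.29.2" or "1.29.0-rc.2".
	func minor(v string) int {
		parts := strings.Split(strings.TrimPrefix(v, "v"), ".")
		n, _ := strconv.Atoi(parts[1])
		return n
	}

	func abs(x int) int {
		if x < 0 {
			return -x
		}
		return x
	}

	func main() {
		for _, pair := range [][2]string{{"1.29.2", "1.29.0-rc.2"}, {"1.29.2", "1.28.4"}} {
			skew := abs(minor(pair[0]) - minor(pair[1]))
			fmt.Printf("kubectl %s vs cluster %s: minor skew %d\n", pair[0], pair[1], skew)
		}
	}
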
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.320895950Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445137320873405,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=771c91b0-0b5b-4912-8e81-b35d76bb5789 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.321413449Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=bbc33148-3bd2-4218-ab50-1ac664ae827e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.321465222Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=bbc33148-3bd2-4218-ab50-1ac664ae827e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.321752373Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=bbc33148-3bd2-4218-ab50-1ac664ae827e name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.369641106Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=71eec4db-e106-4233-be31-00cf5e4b05a1 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.369712456Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=71eec4db-e106-4233-be31-00cf5e4b05a1 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.373501504Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6d963c51-e993-400c-86cb-67585dc45974 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.373900871Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445137373880726,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6d963c51-e993-400c-86cb-67585dc45974 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.374914730Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=87ad746e-3126-4755-8823-d16112589302 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.374971546Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=87ad746e-3126-4755-8823-d16112589302 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.375163160Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=87ad746e-3126-4755-8823-d16112589302 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.418805276Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=7c7fa5cc-6046-4b08-b436-c1ecda351b80 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.418881233Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=7c7fa5cc-6046-4b08-b436-c1ecda351b80 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.420759285Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=40bb8fa9-c881-4864-b8c4-542206a366e9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.421151912Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445137421130255,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=40bb8fa9-c881-4864-b8c4-542206a366e9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.422005297Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=2cd1efc0-3fe4-4e95-b491-14a0c9d1e7b9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.422055535Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=2cd1efc0-3fe4-4e95-b491-14a0c9d1e7b9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.422256946Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=2cd1efc0-3fe4-4e95-b491-14a0c9d1e7b9 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.465558524Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=f7cd300b-d201-444a-aa7e-dc35bd47fda0 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.466339558Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=f7cd300b-d201-444a-aa7e-dc35bd47fda0 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.478670511Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=75a39efd-a81e-4aca-b6dc-99e1571449e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.479049938Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445137479030028,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=75a39efd-a81e-4aca-b6dc-99e1571449e2 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.480235202Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=98a495ca-1da8-4eca-81ab-583014fd4637 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.480845792Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=98a495ca-1da8-4eca-81ab-583014fd4637 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:38:57 embed-certs-992669 crio[696]: time="2024-03-14 19:38:57.481021522Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=98a495ca-1da8-4eca-81ab-583014fd4637 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86e468421ed20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   590e1e0209095       storage-provisioner
	9523d7c5c9a7a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   fccdedb983d8a       coredns-5dd5756b68-tn7lt
	a49df2538e8ba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   ef507fb570b82       coredns-5dd5756b68-ngbmj
	7aeb16f6338f3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   9416359048f14       kube-proxy-hzhsp
	8482deec524df       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   a9f7e3967198d       etcd-embed-certs-992669
	d78ee2c608f47       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   53276463df344       kube-controller-manager-embed-certs-992669
	594604683b152       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   9e3bc57ca56fe       kube-scheduler-embed-certs-992669
	6b8e561bab4ce       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   d980318214a5b       kube-apiserver-embed-certs-992669
	
	
	==> coredns [9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-992669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-992669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=embed-certs-992669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:29:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-992669
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:38:49 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:35:05 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:35:05 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:35:05 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:35:05 +0000   Thu, 14 Mar 2024 19:29:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    embed-certs-992669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 56960399f5484e4ba79a009d8c003f55
	  System UUID:                56960399-f548-4e4b-a79a-009d8c003f55
	  Boot ID:                    285ed688-7e70-459d-973c-72febf3ebd7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ngbmj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 coredns-5dd5756b68-tn7lt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-embed-certs-992669                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m18s
	  kube-system                 kube-apiserver-embed-certs-992669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 kube-controller-manager-embed-certs-992669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-hzhsp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-embed-certs-992669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 metrics-server-57f55c9bc5-kr2n6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age    From             Message
	  ----    ------                   ----   ----             -------
	  Normal  Starting                 9m5s   kube-proxy       
	  Normal  Starting                 9m19s  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s  kubelet          Node embed-certs-992669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s  kubelet          Node embed-certs-992669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s  kubelet          Node embed-certs-992669 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             9m18s  kubelet          Node embed-certs-992669 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  9m18s  kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                9m18s  kubelet          Node embed-certs-992669 status is now: NodeReady
	  Normal  RegisteredNode           9m7s   node-controller  Node embed-certs-992669 event: Registered Node embed-certs-992669 in Controller
	
	
	==> dmesg <==
	[  +0.053279] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042657] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.561111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.444900] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.752980] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.673703] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.066816] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.183521] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.145596] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.266229] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +5.434659] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.064559] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.906116] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +5.633473] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.870543] kauditd_printk_skb: 74 callbacks suppressed
	[Mar14 19:29] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.035177] systemd-fstab-generator[3402]: Ignoring "noauto" option for root device
	[  +4.695065] kauditd_printk_skb: 57 callbacks suppressed
	[  +3.097420] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[ +12.785966] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.318935] systemd-fstab-generator[4056]: Ignoring "noauto" option for root device
	[Mar14 19:30] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525] <==
	{"level":"info","ts":"2024-03-14T19:29:32.723893Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.50.213:2380"}
	{"level":"info","ts":"2024-03-14T19:29:32.72636Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap.db","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:29:32.726435Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:29:32.726474Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:29:32.726603Z","caller":"etcdserver/server.go:738","msg":"started as single-node; fast-forwarding election ticks","local-member-id":"afd31c34526e5864","forward-ticks":9,"forward-duration":"900ms","election-ticks":10,"election-timeout":"1s"}
	{"level":"info","ts":"2024-03-14T19:29:32.726765Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 switched to configuration voters=(12669501187770177636)"}
	{"level":"info","ts":"2024-03-14T19:29:32.726896Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","added-peer-id":"afd31c34526e5864","added-peer-peer-urls":["https://192.168.50.213:2380"]}
	{"level":"info","ts":"2024-03-14T19:29:33.353389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgPreVoteResp from afd31c34526e5864 at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgVoteResp from afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: afd31c34526e5864 elected leader afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.358896Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.363123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"afd31c34526e5864","local-member-attributes":"{Name:embed-certs-992669 ClientURLs:[https://192.168.50.213:2379]}","request-path":"/0/members/afd31c34526e5864/attributes","cluster-id":"64fdbb8e23141dc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:29:33.36357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.364638Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:29:33.376393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:29:33.381391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.381471Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.37933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:29:33.383389Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:29:33.381364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-03-14T19:29:33.396155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	
	
	==> kernel <==
	 19:38:57 up 14 min,  0 users,  load average: 0.06, 0.09, 0.08
	Linux embed-certs-992669 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f] <==
	W0314 19:34:36.206683       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:34:36.206743       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:34:36.206752       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:34:36.206846       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:34:36.207087       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:34:36.207946       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:35:35.113509       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:35:36.207746       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:35:36.207810       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:35:36.207820       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:35:36.209044       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:35:36.209141       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:35:36.209150       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:36:35.113903       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 19:37:35.113002       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:37:36.208670       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:37:36.208715       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:37:36.208725       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:37:36.209987       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:37:36.210095       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:37:36.210103       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:38:35.113013       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944] <==
	I0314 19:33:24.982617       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="233.328µs"
	E0314 19:33:51.022943       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:33:51.475198       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:34:21.028494       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:34:21.484604       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:34:51.035501       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:34:51.497007       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:21.041214       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:21.508599       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:51.048107       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:51.519415       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:35:59.984088       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="415.439µs"
	I0314 19:36:12.984770       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="245.568µs"
	E0314 19:36:21.054606       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:21.528191       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:36:51.061025       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:51.538188       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:37:21.067958       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:21.547919       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:37:51.074715       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:51.559215       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:21.081002       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:21.568913       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:51.087818       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:51.577517       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b] <==
	I0314 19:29:52.039986       1 server_others.go:69] "Using iptables proxy"
	I0314 19:29:52.069127       1 node.go:141] Successfully retrieved node IP: 192.168.50.213
	I0314 19:29:52.163005       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:29:52.163029       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:29:52.215755       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:29:52.215832       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:29:52.215997       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:29:52.216006       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:29:52.223194       1 config.go:188] "Starting service config controller"
	I0314 19:29:52.227518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:29:52.227631       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:29:52.227643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:29:52.235701       1 config.go:315] "Starting node config controller"
	I0314 19:29:52.235712       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:29:52.331541       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:29:52.427746       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:29:52.435843       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19] <==
	W0314 19:29:35.323739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:29:35.324025       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:29:35.324160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:35.324197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.123605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:29:36.123773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:29:36.138643       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:29:36.138710       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:29:36.153611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.153954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.178046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:29:36.178126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:29:36.236716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.236935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.255427       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:29:36.255483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:29:36.258435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.258500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.314951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 19:29:36.315009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 19:29:36.330327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:29:36.330503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:29:36.393740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:29:36.393789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:29:39.289880       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:36:39 embed-certs-992669 kubelet[3729]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:36:39 embed-certs-992669 kubelet[3729]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:36:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:36:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:36:48 embed-certs-992669 kubelet[3729]: E0314 19:36:48.966946    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:36:59 embed-certs-992669 kubelet[3729]: E0314 19:36:59.965176    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:37:13 embed-certs-992669 kubelet[3729]: E0314 19:37:13.965223    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:37:26 embed-certs-992669 kubelet[3729]: E0314 19:37:26.968700    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:37:37 embed-certs-992669 kubelet[3729]: E0314 19:37:37.965706    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:37:39 embed-certs-992669 kubelet[3729]: E0314 19:37:39.100649    3729 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:37:39 embed-certs-992669 kubelet[3729]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:37:39 embed-certs-992669 kubelet[3729]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:37:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:37:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:37:52 embed-certs-992669 kubelet[3729]: E0314 19:37:52.965815    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:38:07 embed-certs-992669 kubelet[3729]: E0314 19:38:07.965490    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:38:19 embed-certs-992669 kubelet[3729]: E0314 19:38:19.964808    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:38:30 embed-certs-992669 kubelet[3729]: E0314 19:38:30.965802    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:38:39 embed-certs-992669 kubelet[3729]: E0314 19:38:39.100016    3729 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:38:39 embed-certs-992669 kubelet[3729]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:38:39 embed-certs-992669 kubelet[3729]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:38:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:38:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:38:41 embed-certs-992669 kubelet[3729]: E0314 19:38:41.965814    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:38:55 embed-certs-992669 kubelet[3729]: E0314 19:38:55.965089    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	
	
	==> storage-provisioner [86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779] <==
	I0314 19:29:54.534206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:29:54.547814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:29:54.547972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:29:54.560652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:29:54.560918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd!
	I0314 19:29:54.562209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d97ee26-9a5f-4277-9344-1b74430be063", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd became leader
	I0314 19:29:54.662018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-992669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kr2n6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6: exit status 1 (68.251381ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kr2n6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/UserAppExistsAfterStop (544.36s)
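Note on the logs above: the kubelet entries show metrics-server stuck in ImagePullBackOff for fake.domain/registry.k8s.io/echoserver:1.4. That is expected for this suite: per the Audit table later in this report, the metrics-server addon is enabled with its registry overridden to fake.domain, so the image can never be pulled. A minimal manual check of that addon state (a sketch only; the k8s-app=metrics-server label is an assumption, while the profile name and flags are taken from the Audit table) would be:

	out/minikube-linux-amd64 -p embed-certs-992669 addons enable metrics-server --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
	kubectl --context embed-certs-992669 -n kube-system get pods -l k8s-app=metrics-server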

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-731976 -n no-preload-731976
start_stop_delete_test.go:274: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:39:29.690017091 +0000 UTC m=+5696.175875985
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
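For reference, the condition this test polls for can be checked by hand against the same profile; a sketch, assuming the kubeconfig context carries the profile name as in the other kubectl invocations in this report:

	kubectl --context no-preload-731976 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-731976 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m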
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-731976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-731976 logs -n 25: (2.114520079s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
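	The retry.go lines above show libmachine polling the libvirt domain's DHCP lease with a growing, jittered delay until an IP address appears. A minimal Go sketch of that wait-for-IP pattern, for illustration only (lookupIP, the initial delay, and the growth factor are assumptions, not minikube's actual implementation):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP is a stand-in for querying the libvirt DHCP leases for the
// domain's MAC address. It always fails in this sketch; a real
// implementation would inspect the network's lease table.
func lookupIP(mac string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

// waitForIP polls until an IP is found or the deadline expires, growing the
// delay between attempts the way the retry.go messages above do.
func waitForIP(mac string, deadline time.Duration) (string, error) {
	start := time.Now()
	delay := 250 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookupIP(mac); err == nil {
			return ip, nil
		}
		// Add jitter so concurrent starts do not poll in lockstep.
		wait := delay + time.Duration(rand.Int63n(int64(delay)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		delay = delay * 3 / 2 // grow the backoff on each attempt
	}
	return "", fmt.Errorf("machine %s did not come up within %v", mac, deadline)
}

func main() {
	if _, err := waitForIP("52:54:00:05:e0:54", 2*time.Second); err != nil {
		fmt.Println(err)
	}
}

	The retry intervals in the log (279ms, 362ms, ... 3.99s) are consistent with this kind of jittered, growing backoff bounded by an overall deadline.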
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
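	fix.go reads the guest clock over SSH (the date +%s.%N command above), compares it with the host-side timestamp of the same moment, and only resynchronizes the VM clock when the difference exceeds a tolerance. A hedged sketch of that comparison using the values from the log (the one-second tolerance here is an assumption for illustration):

package main

import (
	"fmt"
	"time"
)

// clockDelta reports how far the guest clock is from the host clock and
// whether the difference is within the given tolerance.
func clockDelta(guest, host time.Time, tolerance time.Duration) (time.Duration, bool) {
	d := host.Sub(guest)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1710444267, 99898940) // parsed from the guest's date +%s.%N
	host := time.Unix(1710444267, 15704928)  // host-side timestamp of the same moment
	d, ok := clockDelta(guest, host, time.Second)
	fmt.Printf("delta=%v within tolerance=%v\n", d, ok) // delta=84.194012ms within tolerance=true
}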
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
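	After restarting CRI-O, the log waits up to 60s for /var/run/crio/crio.sock to appear and then asks crictl for the runtime version. A small sketch of that polling handshake, assuming crictl is on PATH (illustrative only, not minikube's code):

package main

import (
	"fmt"
	"os"
	"os/exec"
	"time"
)

// waitForPath polls until the given path exists (e.g. the CRI-O socket) or
// the deadline expires, mirroring the "Will wait 60s for socket path" step.
func waitForPath(path string, deadline time.Duration) error {
	start := time.Now()
	for time.Since(start) < deadline {
		if _, err := os.Stat(path); err == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("%s did not appear within %v", path, deadline)
}

func main() {
	if err := waitForPath("/var/run/crio/crio.sock", 60*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	// Once the socket exists, query the runtime version, as the log does
	// with "sudo /usr/bin/crictl version".
	out, err := exec.Command("crictl", "version").CombinedOutput()
	if err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Print(string(out))
}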
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
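	For reference, the kubeadm / kubelet / kube-proxy YAML above is rendered from the kubeadm options struct logged earlier and then copied to the node as /var/tmp/minikube/kubeadm.yaml.new. A minimal, illustrative Go sketch of that render step (the params struct, template text, and values below are assumptions for illustration, not minikube's actual code) could look like:

	package main

	import (
		"bytes"
		"fmt"
		"text/template"
	)

	// params mirrors a few fields of the logged kubeadm options; illustrative only.
	type params struct {
		AdvertiseAddress string
		BindPort         int
		NodeName         string
		PodSubnet        string
		ServiceSubnet    string
		K8sVersion       string
	}

	const initTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: {{.AdvertiseAddress}}
	  bindPort: {{.BindPort}}
	nodeRegistration:
	  name: "{{.NodeName}}"
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	kubernetesVersion: {{.K8sVersion}}
	networking:
	  podSubnet: "{{.PodSubnet}}"
	  serviceSubnet: {{.ServiceSubnet}}
	`

	func main() {
		t := template.Must(template.New("kubeadm").Parse(initTmpl))
		var buf bytes.Buffer
		// Values taken from the log above (embed-certs-992669 on 192.168.50.213).
		err := t.Execute(&buf, params{
			AdvertiseAddress: "192.168.50.213",
			BindPort:         8443,
			NodeName:         "embed-certs-992669",
			PodSubnet:        "10.244.0.0/16",
			ServiceSubnet:    "10.96.0.0/12",
			K8sVersion:       "v1.28.4",
		})
		if err != nil {
			panic(err)
		}
		// In minikube the rendered text is shipped over SSH to /var/tmp/minikube/kubeadm.yaml.new.
		fmt.Print(buf.String())
	}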
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
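	The grep / rewrite pair above keeps /etc/hosts idempotent: any existing control-plane.minikube.internal entry is stripped and a fresh one appended, so repeated starts never duplicate the line. A small Go sketch of the same idea, operating on a local file rather than over SSH (the ensureHostsEntry helper and the path in main are illustrative, not minikube functions):

	package main

	import (
		"fmt"
		"os"
		"strings"
	)

	// ensureHostsEntry drops any line ending in "\t<host>" and appends a fresh
	// "<ip>\t<host>" entry, mirroring the bash one-liner in the log above.
	func ensureHostsEntry(path, ip, host string) error {
		data, err := os.ReadFile(path)
		if err != nil {
			return err
		}
		var kept []string
		for _, line := range strings.Split(string(data), "\n") {
			if line != "" && !strings.HasSuffix(line, "\t"+host) {
				kept = append(kept, line)
			}
		}
		kept = append(kept, fmt.Sprintf("%s\t%s", ip, host))
		return os.WriteFile(path, []byte(strings.Join(kept, "\n")+"\n"), 0644)
	}

	func main() {
		// Illustrative: a scratch copy of /etc/hosts, not the real file.
		if err := ensureHostsEntry("hosts.test", "192.168.50.213", "control-plane.minikube.internal"); err != nil {
			fmt.Println("update failed:", err)
		}
	}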
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
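	The three openssl x509 -hash / ln -fs pairs above follow the standard OpenSSL CA-directory convention: each trusted certificate becomes reachable in /etc/ssl/certs under a symlink named <subject-hash>.0 (b5213941.0 for minikubeCA.pem above), which is how TLS libraries look it up. A hedged Go sketch that shells out to openssl the same way (subjectHash is an illustrative helper name):

	package main

	import (
		"fmt"
		"os/exec"
		"strings"
	)

	// subjectHash returns the OpenSSL subject hash for a PEM certificate, the same
	// value the log's "openssl x509 -hash -noout -in <pem>" command prints.
	func subjectHash(pem string) (string, error) {
		out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
		if err != nil {
			return "", err
		}
		return strings.TrimSpace(string(out)), nil
	}

	func main() {
		pem := "/usr/share/ca-certificates/minikubeCA.pem" // path taken from the log above
		h, err := subjectHash(pem)
		if err != nil {
			fmt.Println("hash failed:", err)
			return
		}
		// minikube then symlinks /etc/ssl/certs/<hash>.0 -> <pem>.
		fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pem, h)
	}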
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
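	Each "-checkend 86400" call above asks openssl whether the certificate expires within the next 24 hours, so stale certs can be regenerated before the control plane comes back up. The same check can be done in-process with crypto/x509; a minimal sketch (expiresWithin is an illustrative helper, not a minikube function):

	package main

	import (
		"crypto/x509"
		"encoding/pem"
		"fmt"
		"os"
		"time"
	)

	// expiresWithin reports whether the PEM certificate at path expires within d,
	// the in-process equivalent of "openssl x509 -checkend <seconds>".
	func expiresWithin(path string, d time.Duration) (bool, error) {
		data, err := os.ReadFile(path)
		if err != nil {
			return false, err
		}
		block, _ := pem.Decode(data)
		if block == nil {
			return false, fmt.Errorf("no PEM data in %s", path)
		}
		cert, err := x509.ParseCertificate(block.Bytes)
		if err != nil {
			return false, err
		}
		return time.Now().Add(d).After(cert.NotAfter), nil
	}

	func main() {
		soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
		if err != nil {
			fmt.Println("check failed:", err)
			return
		}
		fmt.Println("expires within 24h:", soon)
	}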
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
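	The sequence above (connection refused, then 403 for the anonymous user, then 500 while the rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes post-start hooks finish, then 200 "ok") is the normal shape of an apiserver coming back up; minikube simply polls /healthz at a fixed interval until it sees 200. A hedged sketch of such a poll loop (waitForHealthz is an illustrative name, and TLS verification is skipped only because this probe cares about reachability, not cert trust):

	package main

	import (
		"crypto/tls"
		"fmt"
		"net/http"
		"time"
	)

	// waitForHealthz polls url until it returns HTTP 200 or the deadline passes.
	func waitForHealthz(url string, interval, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 2 * time.Second,
			Transport: &http.Transport{
				// The probe only checks reachability, so the serving cert is not verified here.
				TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
			},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
			}
			time.Sleep(interval)
		}
		return fmt.Errorf("apiserver did not report healthy within %s", timeout)
	}

	func main() {
		err := waitForHealthz("https://192.168.50.213:8443/healthz", 500*time.Millisecond, time.Minute)
		fmt.Println(err)
	}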
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
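	The pod_ready waits above skip pods scheduled on a node whose own Ready condition is still "False" and otherwise poll until each system-critical pod reports Ready. With client-go, a condensed sketch of that readiness check looks roughly like this (isPodReady and the kubeconfig path are illustrative; the snippet assumes the usual client-go modules are available):

	package main

	import (
		"context"
		"fmt"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	// isPodReady fetches a pod and reports whether its Ready condition is True.
	func isPodReady(cs *kubernetes.Clientset, ns, name string) (bool, error) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			return false, err
		}
		for _, c := range pod.Status.Conditions {
			if c.Type == corev1.PodReady {
				return c.Status == corev1.ConditionTrue, nil
			}
		}
		return false, nil
	}

	func main() {
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config") // illustrative path
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}
		ready, err := isPodReady(cs, "kube-system", "kube-proxy-nsgs6")
		fmt.Println(ready, err)
	}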
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
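	The retry.go lines above wait for the restarted old-k8s-version VM to pick up an IP address, sleeping a little longer after each failed lookup (1.4s, 2.3s, 3.1s, 3.2s, 4.7s). A minimal sketch of that growing-backoff-with-jitter pattern (retryWithBackoff is an illustrative name, not minikube's retry API):

	package main

	import (
		"fmt"
		"math/rand"
		"time"
	)

	// retryWithBackoff calls fn until it succeeds or attempts run out, sleeping a
	// jittered, growing delay between tries.
	func retryWithBackoff(attempts int, base time.Duration, fn func() error) error {
		var err error
		for i := 0; i < attempts; i++ {
			if err = fn(); err == nil {
				return nil
			}
			// Grow the delay with each attempt and add up to 50% jitter.
			delay := base * time.Duration(i+1)
			delay += time.Duration(rand.Int63n(int64(delay)/2 + 1))
			fmt.Printf("will retry after %s: %v\n", delay, err)
			time.Sleep(delay)
		}
		return err
	}

	func main() {
		tries := 0
		err := retryWithBackoff(5, time.Second, func() error {
			tries++
			if tries < 4 {
				return fmt.Errorf("waiting for machine to come up")
			}
			return nil
		})
		fmt.Println("done:", err)
	}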
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
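	The WaitForSSH step above shells out to /usr/bin/ssh with the option list printed in the log and treats a successful "exit 0" as "SSH is available". A hedged Go sketch of driving an external ssh client the same way (sshAvailable is an illustrative helper, and only a subset of the logged options is shown):

	package main

	import (
		"fmt"
		"os/exec"
	)

	// sshAvailable runs "exit 0" over an external ssh client, mirroring the option
	// list in the libmachine log; a nil error means the VM accepts SSH.
	func sshAvailable(user, ip, keyPath string) error {
		args := []string{
			"-F", "/dev/null",
			"-o", "ConnectionAttempts=3",
			"-o", "ConnectTimeout=10",
			"-o", "StrictHostKeyChecking=no",
			"-o", "UserKnownHostsFile=/dev/null",
			"-o", "IdentitiesOnly=yes",
			"-i", keyPath,
			"-p", "22",
			fmt.Sprintf("%s@%s", user, ip),
			"exit 0",
		}
		return exec.Command("/usr/bin/ssh", args...).Run()
	}

	func main() {
		err := sshAvailable("docker", "192.168.72.211",
			"/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa")
		fmt.Println("ssh available:", err == nil, err)
	}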
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
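The "%!s(MISSING)" in the provisioning command above (and in the later "date +%!s(MISSING).%!N(MISSING)", "%!p(MISSING)", and quoted "0%!"(MISSING) values further down) is an artifact of the log itself: the runner's command strings contain literal printf verbs, and re-printing them through the logger strips their arguments, so "%s" becomes "%!s(MISSING)" and "0%" becomes "0%!"(MISSING). Judging from the output echoed back on the lines above, the step writes an insecure-registry flag into a CRI-O environment file and restarts the service; a minimal shell sketch of that intent, with the verb restored and all paths and values taken from the log:

	# drop the minikube options for CRI-O into /etc/sysconfig and restart the runtime
	sudo mkdir -p /etc/sysconfig
	printf '%s\n' "CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '" | sudo tee /etc/sysconfig/crio.minikube
	sudo systemctl restart crio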
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
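Everything from "disabling cri-docker service" down to this point is the provisioner making sure CRI-O is the only container runtime on the guest. Condensed into the plain systemctl calls visible in the log (a sketch of the same steps, not the exact runner invocations):

	# stop, disable and mask the Docker-side units so CRI-O owns the CRI socket
	sudo systemctl stop -f cri-docker.socket cri-docker.service
	sudo systemctl disable cri-docker.socket
	sudo systemctl mask cri-docker.service
	sudo systemctl stop -f docker.socket docker.service
	sudo systemctl disable docker.socket
	sudo systemctl mask docker.service
	sudo systemctl is-active --quiet service docker   # the final check above; non-zero once docker is gone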
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
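The sed calls above amount to three changes in CRI-O's drop-in config: pin the pause image expected by this Kubernetes version, switch the cgroup manager to cgroupfs, and run conmon in the pod's cgroup. Consolidated for readability (file path and values copied from the log lines above):

	# /etc/crio/crio.conf.d/02-crio.conf adjustments made over SSH
	sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf
	sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf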
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
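Because no preload was present on the guest (the stat of /preloaded.tar.lz4 failed above), the ~473 MB preload tarball is copied in over SSH, unpacked under /var, and deleted. A rough stand-alone equivalent using plain scp/ssh (minikube actually streams the file through its own SSH runner; the key and cache paths below are the ones shown in the log, with MINIKUBE_HOME standing in for /home/jenkins/minikube-integration/18384-942544/.minikube):

	# copy the cached preload to the guest, unpack it into /var, clean up
	scp -i "$MINIKUBE_HOME/machines/old-k8s-version-968094/id_rsa" \
	    "$MINIKUBE_HOME/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4" \
	    docker@192.168.72.211:/tmp/preloaded.tar.lz4
	ssh -i "$MINIKUBE_HOME/machines/old-k8s-version-968094/id_rsa" docker@192.168.72.211 \
	    'sudo mv /tmp/preloaded.tar.lz4 /preloaded.tar.lz4 &&
	     sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4 &&
	     sudo rm -f /preloaded.tar.lz4'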
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
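With all cached images transferred, the runtime's view can be cross-checked from inside the VM. A minimal sketch, assuming a shell on the minikube machine (for example via minikube -p no-preload-731976 ssh); the image names are the ones loaded above:

    # list what podman just loaded and what the CRI (crictl) will serve to the kubelet
    sudo podman images | grep -E 'kube-apiserver|kube-controller-manager|kube-scheduler|kube-proxy|etcd|coredns|storage-provisioner'
    sudo crictl images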
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
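The 2166-byte kubeadm.yaml.new written above is the config the later kubeadm phases consume; before the restart proceeds it can be inspected and compared with the previous copy. A minimal sketch using the paths from the log (the diff is the same check minikube itself runs further down):

    sudo cat /var/tmp/minikube/kubeadm.yaml.new
    sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new || true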
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
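The one-liner above rewrites the control-plane.minikube.internal entry atomically: it filters any stale line out of /etc/hosts, appends the current IP, and copies the temp file back in a single step. An equivalent sketch, with /tmp/hosts.new as a hypothetical temp-file name:

    # drop any old entry, append the fresh one, then replace /etc/hosts in one copy
    { grep -v $'\tcontrol-plane.minikube.internal$' /etc/hosts
      printf '192.168.39.148\tcontrol-plane.minikube.internal\n'
    } > /tmp/hosts.new && sudo cp /tmp/hosts.new /etc/hosts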
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
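The openssl/ln pairs above build the standard OpenSSL hashed-directory layout under /etc/ssl/certs, where a subject-hash symlink (for example b5213941.0 for minikubeCA.pem) lets TLS clients locate the CA. The same step for one certificate, as a minimal sketch:

    CERT=/usr/share/ca-certificates/minikubeCA.pem
    HASH=$(openssl x509 -hash -noout -in "$CERT")   # prints the subject hash, e.g. b5213941
    sudo ln -fs "$CERT" "/etc/ssl/certs/${HASH}.0"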
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
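Each -checkend 86400 call above exits non-zero when the certificate would expire within the next 86400 seconds (24 hours), which lets the restart path notice soon-to-expire certs. A minimal sketch of the same check with an explicit message, using one of the paths from the log:

    if ! sudo openssl x509 -noout -checkend 86400 -in /var/lib/minikube/certs/apiserver-kubelet-client.crt; then
      echo "apiserver-kubelet-client.crt expires within 24 hours"
    fi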
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
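The loop above keeps polling /healthz until every post-start hook (rbac/bootstrap-roles, bootstrap-controller, and so on) reports ok; the early 403s are simply the RBAC bootstrap roles not existing yet for the anonymous probe. The same endpoint can be checked by hand, assuming anonymous access to /healthz is permitted once the cluster is up (the eventual 200 in the log suggests it is):

    curl -sk "https://192.168.39.148:8443/healthz?verbose"
    # with ?verbose this prints the same [+]/[-] check list seen above; without it, just "ok"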
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
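The 457-byte file just written is the bridge CNI config minikube uses for the kvm2 + crio combination. Its contents can be inspected on the node; a minimal sketch (path taken from the log):

    sudo ls -la /etc/cni/net.d/
    sudo cat /etc/cni/net.d/1-k8s.conflist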
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
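The per-pod waits above can be reproduced from the host with kubectl; a minimal sketch, assuming the kubeconfig context carries the profile name no-preload-731976, which is minikube's default:

    kubectl --context no-preload-731976 get nodes
    kubectl --context no-preload-731976 -n kube-system get pods
    # while the node still reports Ready=False, the waits above skip each pod with "(skipping!)"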
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
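
The 992344 lines above are one iteration of the diagnostic loop minikube runs while waiting for the v1.20.0 control plane to come up: probe for a kube-apiserver process, ask CRI-O for each control-plane container by name, and, when every query comes back empty, collect the kubelet, dmesg, describe-nodes, CRI-O and container-status logs before trying again a few seconds later. The same probes can be repeated by hand from a shell on the node (a sketch only; it assumes shell access, e.g. via minikube ssh):

    # The probes the loop runs, executed manually on the node
    sudo pgrep -xnf 'kube-apiserver.*minikube.*'      # any apiserver process at all?
    sudo crictl ps -a --quiet --name=kube-apiserver   # any apiserver container known to CRI-O?
    sudo journalctl -u kubelet -n 400                 # kubelet's view of why static pods are missing
    sudo journalctl -u crio -n 400                    # CRI-O's side of the same story
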
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
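
The interleaved 991880, 992563 and 992056 lines come from three other profiles that are each waiting for their metrics-server pod to become Ready; pod_ready.go re-checks the pod status roughly every two seconds until a four-minute deadline. A rough manual equivalent of that wait (a sketch; the k8s-app=metrics-server label selector is assumed here, and <profile> stands for whichever kubeconfig context is being watched):

    # Watching the same condition by hand (label selector assumed)
    kubectl --context <profile> -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context <profile> -n kube-system wait pod -l k8s-app=metrics-server \
        --for=condition=Ready --timeout=4m
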
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
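
Every "describe nodes" attempt in this log fails identically: the kubectl binary on the node cannot reach an apiserver on localhost:8443, which is consistent with the empty crictl listings above. One way to confirm from the node that nothing is serving that port (a sketch):

    # Is anything listening on the apiserver port?
    sudo ss -tlnp | grep 8443 || echo "nothing listening on 8443"
    # Does the apiserver answer a health probe?
    curl -sk https://localhost:8443/healthz || echo "apiserver unreachable"
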
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
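
At this point the 992056 profile has exhausted its four-minute wait on metrics-server, so minikube stops trying to restart the existing control plane and falls back to resetting it before re-initializing. The reset step, as shown on the line above, is:

    sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" \
        kubeadm reset --cri-socket /var/run/crio/crio.sock --force

The subsequent re-initialization of that cluster falls outside this stretch of the log.
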
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
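The grep/rm pairs above are the stale-kubeconfig check: each file under /etc/kubernetes is kept only if it already points at the expected control-plane endpoint, otherwise it is removed so the kubeadm init that follows starts from clean configuration. A condensed sketch of the same logic, run on the node:

# Mirror of the stale-config cleanup above: drop any kubeconfig that does not
# reference the expected control-plane endpoint before re-running kubeadm init.
for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
  sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
    || sudo rm -f "/etc/kubernetes/$f"
done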
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
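The preflight warning above is advisory only: the kubelet systemd unit is not enabled on the node. The fix the warning itself suggests (minikube later starts the kubelet explicitly in this run) is:

# Enable and start the kubelet unit, as the kubeadm preflight warning suggests.
sudo systemctl enable kubelet.service
sudo systemctl start kubelet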
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
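Here a 457-byte bridge CNI config is copied to /etc/cni/net.d/1-k8s.conflist after the cni.go lines above picked the bridge CNI for the kvm2 + crio combination. The log does not include the file itself; the snippet below is an illustrative bridge/host-local conflist of that general shape, with assumed values rather than the actual bytes:

# Illustrative only: the real 1-k8s.conflist written above is not shown in the log,
# so every value here is an assumption about a typical bridge CNI plugin chain.
sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF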
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
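The run of repeated "kubectl get sa default" calls above is a readiness poll: minikube retries roughly twice a second until the "default" ServiceAccount exists, which is what the elevateKubeSystemPrivileges step waits on. A condensed sketch of that wait loop on the node:

# Poll until the default ServiceAccount exists (mirrors the ~0.5s retry cadence above).
until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
    --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
  sleep 0.5
done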
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
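The apply calls above install the storageclass, storage-provisioner, and metrics-server manifests with the cluster's bundled kubectl. The same addons can also be toggled from the host with the minikube CLI; the profile name below is the one shown in this run:

# Host-side equivalent of the addon installs applied above.
minikube -p embed-certs-992669 addons enable storage-provisioner
minikube -p embed-certs-992669 addons enable metrics-server
minikube -p embed-certs-992669 addons list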
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
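
The three addons reported here (default-storageclass, storage-provisioner, metrics-server) can also be toggled and checked by hand against the same profile; a minimal sketch, assuming the profile/context name embed-certs-992669 shown in this log:

    # Sketch: enable and verify the same addons manually
    minikube -p embed-certs-992669 addons enable metrics-server
    minikube -p embed-certs-992669 addons list | grep -E 'default-storageclass|storage-provisioner|metrics-server'
    # Once the metrics-server pod is Ready, the metrics API answers:
    kubectl --context embed-certs-992669 top nodes
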
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
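
The healthz probe above can be reproduced outside the test harness; a sketch, assuming the endpoint from this log, minikube's usual CA location (~/.minikube/ca.crt), and the default anonymous access to /healthz:

    # Sketch: the same probe api_server.go performs
    curl --cacert ~/.minikube/ca.crt https://192.168.50.213:8443/healthz
    # or let kubectl handle TLS and auth:
    kubectl --context embed-certs-992669 get --raw /healthz
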
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
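
The ephemeral-storage and CPU figures being verified here come straight from the node object; a sketch, with the node name assumed to equal the profile name, as is usual for a single-node minikube cluster:

    # Sketch: inspect the capacity and pressure conditions checked above
    kubectl --context embed-certs-992669 get node embed-certs-992669 -o jsonpath='{.status.capacity}{"\n"}'
    kubectl --context embed-certs-992669 describe node embed-certs-992669 | grep -A8 'Conditions:'
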
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
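
The minor skew of 1 noted above (kubectl 1.29.2 against a 1.28.4 control plane) is within kubectl's supported +/-1 version skew, so it is informational only. A quick smoke test of the context minikube just selected (sketch):

    # Sketch: confirm the active context and that the cluster answers
    kubectl config current-context        # expected: embed-certs-992669
    kubectl get pods -A                   # should list the kube-system pods shown above
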
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
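
The four grep/rm pairs above are the stale-kubeconfig cleanup: each file under /etc/kubernetes is kept only if it already references the expected control-plane endpoint (here https://control-plane.minikube.internal:8444) and is otherwise removed before kubeadm init runs. A condensed sketch of the same logic:

    # Sketch: the stale-config check performed above
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      sudo grep -q 'https://control-plane.minikube.internal:8444' "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done
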
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
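
The 4m0s wait on metrics-server-57f55c9bc5-rhg5r expired without the pod ever reporting Ready, which is why WaitExtra gives up here. The usual next step is to inspect the pod directly; a sketch, where the context name is inferred from the no-preload-731976 pods later in this log and the label and deployment names are the ones the metrics-server addon normally uses, not confirmed here:

    # Sketch: diagnose why metrics-server never became Ready
    kubectl --context no-preload-731976 -n kube-system get pods -l k8s-app=metrics-server -o wide
    kubectl --context no-preload-731976 -n kube-system describe pod metrics-server-57f55c9bc5-rhg5r
    kubectl --context no-preload-731976 -n kube-system logs deploy/metrics-server --tail=50
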
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
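
The journalctl and crictl invocations in this block are the same diagnostics bundle that minikube's own log collector gathers; they can be re-run against the VM without the test harness. A sketch, with the profile name inferred from the pod names later in this log:

    # Sketch: re-collect the same diagnostics by hand
    minikube -p no-preload-731976 logs --file=no-preload.log
    minikube -p no-preload-731976 ssh -- sudo journalctl -u kubelet -n 400
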
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
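
The bridge CNI configuration is written a few lines further down as a 457-byte /etc/cni/net.d/1-k8s.conflist. The file itself is not reproduced in this log; the sketch below is only a representative bridge + host-local conflist to show the shape of such a file, and every value in it is an assumption rather than the actual content minikube writes:

    # Sketch only: illustrative bridge CNI conflist (NOT the file minikube writes)
    sudo tee /etc/cni/net.d/1-k8s.conflist >/dev/null <<'EOF'
    {
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipam": { "type": "host-local", "subnet": "10.244.0.0/16" }
        },
        { "type": "portmap", "capabilities": { "portMappings": true } }
      ]
    }
    EOF
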
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.170844698Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445171170823483,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5d5ebc06-3549-4767-b18c-b7146addc597 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.171392381Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b801b637-60a0-4d55-8b46-7ea5640e3990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.171438113Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b801b637-60a0-4d55-8b46-7ea5640e3990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.171878187Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b801b637-60a0-4d55-8b46-7ea5640e3990 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.216391074Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=12bb933f-970e-41ca-bfca-14a22993fef3 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.216500816Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=12bb933f-970e-41ca-bfca-14a22993fef3 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.217519709Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=195f8c39-4740-44da-9a7c-5fc4183466ec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.217851942Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445171217833112,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=195f8c39-4740-44da-9a7c-5fc4183466ec name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.218525506Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c32cab0d-a95e-4772-afdf-44ab45fd01a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.218575283Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c32cab0d-a95e-4772-afdf-44ab45fd01a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.218785786Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c32cab0d-a95e-4772-afdf-44ab45fd01a0 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.262676154Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=93b55bdc-4afb-4947-bae1-9fae4a35a644 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.262744566Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=93b55bdc-4afb-4947-bae1-9fae4a35a644 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.264277117Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=38310680-3b5b-4475-8519-84987a154dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.264642332Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445171264622459,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=38310680-3b5b-4475-8519-84987a154dd4 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.265204543Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=c3e595b9-713f-4b51-a7f1-51a323ebcb48 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.265254959Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=c3e595b9-713f-4b51-a7f1-51a323ebcb48 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.265464752Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=c3e595b9-713f-4b51-a7f1-51a323ebcb48 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.300655819Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3cf0c8cd-819b-4d42-adeb-83566473ace5 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.301147024Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3cf0c8cd-819b-4d42-adeb-83566473ace5 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.304504211Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=959f6985-6f72-4786-a723-fd4a704fda41 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.304958221Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445171304930262,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=959f6985-6f72-4786-a723-fd4a704fda41 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.306306626Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f9c31363-2e12-4211-b9c5-95b55f167a48 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.306424986Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f9c31363-2e12-4211-b9c5-95b55f167a48 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:31 no-preload-731976 crio[692]: time="2024-03-14 19:39:31.306696908Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f9c31363-2e12-4211-b9c5-95b55f167a48 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aeed99a1392ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      12 minutes ago      Running             storage-provisioner       2                   df59d3e5bdf72       storage-provisioner
	361b64b9fbb1c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   13 minutes ago      Running             busybox                   1                   c90dc3736e64c       busybox
	ec0841c5bdfb8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      13 minutes ago      Running             coredns                   1                   0772f8e21adbb       coredns-76f75df574-mcddh
	3a8800127b849       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      13 minutes ago      Running             kube-proxy                1                   35e6ab99eeae6       kube-proxy-fkn7b
	27e79a384706c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      13 minutes ago      Exited              storage-provisioner       1                   df59d3e5bdf72       storage-provisioner
	db597de214816       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      13 minutes ago      Running             etcd                      1                   53aef9fd1c6e1       etcd-no-preload-731976
	5b8e529f94562       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      13 minutes ago      Running             kube-scheduler            1                   fb759e322f09b       kube-scheduler-no-preload-731976
	a09531e613ae5       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      13 minutes ago      Running             kube-apiserver            1                   ab4a8b629061d       kube-apiserver-no-preload-731976
	9151eb0c1b33c       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      13 minutes ago      Running             kube-controller-manager   1                   32c456183e262       kube-controller-manager-no-preload-731976
	
	
	==> coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48549 - 19461 "HINFO IN 4591029028017746442.7322755858764164589. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029339935s
	
	
	==> describe nodes <==
	Name:               no-preload-731976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-731976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=no-preload-731976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_15_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-731976
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:39:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:36:44 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:36:44 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:36:44 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:36:44 +0000   Thu, 14 Mar 2024 19:26:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    no-preload-731976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca284e76bb3247e9805a6c098215c046
	  System UUID:                ca284e76-bb32-47e9-805a-6c098215c046
	  Boot ID:                    ec12a515-8905-41c1-8a1d-83d8375cab5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%)        0 (0%)      0 (0%)           0 (0%)         22m
	  kube-system                 coredns-76f75df574-mcddh                     100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     23m
	  kube-system                 etcd-no-preload-731976                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         23m
	  kube-system                 kube-apiserver-no-preload-731976             250m (12%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-controller-manager-no-preload-731976    200m (10%)    0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-proxy-fkn7b                             0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 kube-scheduler-no-preload-731976             100m (5%)     0 (0%)      0 (0%)           0 (0%)         23m
	  kube-system                 metrics-server-57f55c9bc5-rhg5r              100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         22m
	  kube-system                 storage-provisioner                          0 (0%)        0 (0%)      0 (0%)           0 (0%)         23m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%)   0 (0%)
	  memory             370Mi (17%)  170Mi (8%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 23m                kube-proxy       
	  Normal  Starting                 13m                kube-proxy       
	  Normal  NodeAllocatableEnforced  23m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  23m                kubelet          Node no-preload-731976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    23m                kubelet          Node no-preload-731976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     23m                kubelet          Node no-preload-731976 status is now: NodeHasSufficientPID
	  Normal  NodeReady                23m                kubelet          Node no-preload-731976 status is now: NodeReady
	  Normal  Starting                 23m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           23m                node-controller  Node no-preload-731976 event: Registered Node no-preload-731976 in Controller
	  Normal  Starting                 13m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  13m (x8 over 13m)  kubelet          Node no-preload-731976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    13m (x8 over 13m)  kubelet          Node no-preload-731976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     13m (x7 over 13m)  kubelet          Node no-preload-731976 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  13m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           13m                node-controller  Node no-preload-731976 event: Registered Node no-preload-731976 in Controller
	
	
	==> dmesg <==
	[Mar14 19:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056178] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047988] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.870385] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.687564] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.739459] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.516277] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.072805] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080686] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.173767] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.142530] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.262325] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[ +17.382411] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.067518] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.182667] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[Mar14 19:26] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.508221] systemd-fstab-generator[1916]: Ignoring "noauto" option for root device
	[  +4.197170] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.268522] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] <==
	{"level":"info","ts":"2024-03-14T19:25:59.107663Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/snap","suffix":"snap","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:25:59.107697Z","caller":"fileutil/purge.go:44","msg":"started to purge file","dir":"/var/lib/minikube/etcd/member/wal","suffix":"wal","max":5,"interval":"30s"}
	{"level":"info","ts":"2024-03-14T19:25:59.110233Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.39.148:2380"}
	{"level":"info","ts":"2024-03-14T19:25:59.110316Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.148:2380"}
	{"level":"info","ts":"2024-03-14T19:25:59.111264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa switched to configuration voters=(10675872347264799914)"}
	{"level":"info","ts":"2024-03-14T19:25:59.111336Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8bee001f44ea94","local-member-id":"942851562e2254aa","added-peer-id":"942851562e2254aa","added-peer-peer-urls":["https://192.168.39.148:2380"]}
	{"level":"info","ts":"2024-03-14T19:25:59.111445Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8bee001f44ea94","local-member-id":"942851562e2254aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:25:59.111516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:26:00.138816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.138925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.138972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa received MsgPreVoteResp from 942851562e2254aa at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.139002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa received MsgVoteResp from 942851562e2254aa at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became leader at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 942851562e2254aa elected leader 942851562e2254aa at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.140819Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"942851562e2254aa","local-member-attributes":"{Name:no-preload-731976 ClientURLs:[https://192.168.39.148:2379]}","request-path":"/0/members/942851562e2254aa/attributes","cluster-id":"8bee001f44ea94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:26:00.140867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:26:00.141396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:26:00.141688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:26:00.141728Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:26:00.1433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:26:00.143465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.148:2379"}
	{"level":"info","ts":"2024-03-14T19:36:00.171598Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2024-03-14T19:36:00.175443Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.506881ms","hash":1364715535}
	{"level":"info","ts":"2024-03-14T19:36:00.17559Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1364715535,"revision":846,"compact-revision":-1}
	
	
	==> kernel <==
	 19:39:31 up 14 min,  0 users,  load average: 0.08, 0.16, 0.13
	Linux no-preload-731976 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] <==
	I0314 19:34:02.676991       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:36:01.679540       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:36:01.679860       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0314 19:36:02.680468       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:36:02.680614       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:36:02.680662       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:36:02.680504       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:36:02.680788       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:36:02.682801       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:37:02.681761       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:37:02.682123       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:37:02.682176       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:37:02.683164       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:37:02.683283       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:37:02.683319       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:39:02.683317       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:39:02.683405       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:39:02.683415       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:39:02.683487       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:39:02.683534       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:39:02.684497       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] <==
	I0314 19:33:44.619555       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:34:14.230271       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:34:14.627744       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:34:44.239182       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:34:44.636519       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:14.245134       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:14.645133       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:44.250790       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:44.653451       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:36:14.258167       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:14.662463       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:36:44.264108       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:44.672119       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:37:14.271260       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:14.679167       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:37:18.767481       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="1.473999ms"
	I0314 19:37:31.770851       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="202.358µs"
	E0314 19:37:44.277571       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:44.689193       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:14.284216       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:14.698230       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:44.289672       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:44.707891       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:39:14.294744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:39:14.716413       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] <==
	I0314 19:26:02.341212       1 server_others.go:72] "Using iptables proxy"
	I0314 19:26:02.352720       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.148"]
	I0314 19:26:02.396641       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0314 19:26:02.396658       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:26:02.396668       1 server_others.go:168] "Using iptables Proxier"
	I0314 19:26:02.400441       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:26:02.400745       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0314 19:26:02.400759       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:26:02.401977       1 config.go:188] "Starting service config controller"
	I0314 19:26:02.402132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:26:02.402185       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:26:02.402211       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:26:02.406423       1 config.go:315] "Starting node config controller"
	I0314 19:26:02.406481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:26:02.503253       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:26:02.503333       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:26:02.506579       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] <==
	I0314 19:25:59.374195       1 serving.go:380] Generated self-signed cert in-memory
	W0314 19:26:01.604735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 19:26:01.604792       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:26:01.604812       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 19:26:01.604818       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:26:01.637783       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 19:26:01.637853       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:26:01.642580       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:26:01.642741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:26:01.642781       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:26:01.642797       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:26:01.743422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:37:07 no-preload-731976 kubelet[1323]: E0314 19:37:07.770699    1323 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:37:07 no-preload-731976 kubelet[1323]: E0314 19:37:07.770770    1323 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:37:07 no-preload-731976 kubelet[1323]: E0314 19:37:07.770977    1323 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-skkwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rhg5r_kube-system(5753b397-3b41-4fa7-8f7f-65db44a90b06): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 19:37:07 no-preload-731976 kubelet[1323]: E0314 19:37:07.771135    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:37:18 no-preload-731976 kubelet[1323]: E0314 19:37:18.747561    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:37:31 no-preload-731976 kubelet[1323]: E0314 19:37:31.747639    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:37:44 no-preload-731976 kubelet[1323]: E0314 19:37:44.746698    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:37:57 no-preload-731976 kubelet[1323]: E0314 19:37:57.787285    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:37:57 no-preload-731976 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:37:57 no-preload-731976 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:37:57 no-preload-731976 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:37:57 no-preload-731976 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:37:58 no-preload-731976 kubelet[1323]: E0314 19:37:58.747307    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:38:09 no-preload-731976 kubelet[1323]: E0314 19:38:09.746232    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:38:24 no-preload-731976 kubelet[1323]: E0314 19:38:24.747961    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:38:35 no-preload-731976 kubelet[1323]: E0314 19:38:35.748273    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:38:48 no-preload-731976 kubelet[1323]: E0314 19:38:48.748435    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:38:57 no-preload-731976 kubelet[1323]: E0314 19:38:57.789855    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:38:57 no-preload-731976 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:38:57 no-preload-731976 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:38:57 no-preload-731976 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:38:57 no-preload-731976 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:39:00 no-preload-731976 kubelet[1323]: E0314 19:39:00.747110    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:39:15 no-preload-731976 kubelet[1323]: E0314 19:39:15.747623    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:39:28 no-preload-731976 kubelet[1323]: E0314 19:39:28.747777    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	
	
	==> storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] <==
	I0314 19:26:02.268142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 19:26:32.275453       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] <==
	I0314 19:26:33.073620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:26:33.086947       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:26:33.087141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:26:50.490890       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:26:50.491251       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979!
	I0314 19:26:50.491724       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0b07f95-d600-4dbe-9530-da031d2a9224", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979 became leader
	I0314 19:26:50.592385       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-731976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rhg5r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r: exit status 1 (76.238299ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rhg5r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/UserAppExistsAfterStop (544.28s)
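Note that the recurring metrics-server ImagePullBackOff in the kubelet log above is expected for this profile: the addon was enabled with its registry redirected to fake.domain (see the Audit table in the next section), so the pod that actually matters here is the one the test waits for and that never became Ready. A minimal manual check, assuming the kubeconfig context no-preload-731976 used by the post-mortem commands above and the same k8s-app=kubernetes-dashboard selector the sibling UserAppExistsAfterStop tests wait on:

	kubectl --context no-preload-731976 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-731976 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=540s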

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.19s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
E0314 19:32:14.527870  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:32:14.854458  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: client rate limiter Wait returned an error: context deadline exceeded
start_stop_delete_test.go:274: ***** TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
start_stop_delete_test.go:274: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:39:38.405822777 +0000 UTC m=+5704.891681659
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
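The failed wait can be reproduced by hand; a sketch, assuming the kubeconfig context is named after the profile, matching the --context usage elsewhere in this report:

	kubectl --context default-k8s-diff-port-440341 -n kubernetes-dashboard wait --for=condition=ready pod -l k8s-app=kubernetes-dashboard --timeout=9m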
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-440341 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-440341 logs -n 25: (2.125263321s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
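The SSH snippet above is an idempotent /etc/hosts rewrite: it replaces an existing 127.0.1.1 entry with the new hostname when one exists and appends one otherwise. A rough sketch of how such a snippet can be composed from the hostname, assuming only the Go standard library (hostsSnippet is a hypothetical helper for illustration, not minikube's own API):

package main

import "fmt"

// hostsSnippet reproduces the shell logic from the log above: replace an
// existing 127.0.1.1 entry with the new hostname, or append one if missing.
// (Hypothetical helper name, used here only for illustration.)
func hostsSnippet(hostname string) string {
	return fmt.Sprintf(`if ! grep -xq '.*\s%[1]s' /etc/hosts; then
  if grep -xq '127.0.1.1\s.*' /etc/hosts; then
    sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 %[1]s/g' /etc/hosts
  else
    echo '127.0.1.1 %[1]s' | sudo tee -a /etc/hosts
  fi
fi`, hostname)
}

func main() {
	fmt.Println(hostsSnippet("embed-certs-992669"))
}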
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
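The configureAuth step above generates a server certificate whose SANs cover 127.0.0.1, 192.168.50.213, the machine name, localhost and minikube, then copies ca.pem, server.pem and server-key.pem into /etc/docker on the guest. The sketch below shows roughly how a certificate with those SANs can be produced with Go's crypto/x509; it self-signs for brevity, whereas the flow in the log signs with the ca.pem/ca-key.pem pair it names:

package main

import (
	"crypto/rand"
	"crypto/rsa"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	// SANs taken from the log line above; self-signed here as a simplification.
	key, err := rsa.GenerateKey(rand.Reader, 2048)
	if err != nil {
		panic(err)
	}
	tmpl := &x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.embed-certs-992669"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageKeyEncipherment | x509.KeyUsageDigitalSignature,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		DNSNames:     []string{"embed-certs-992669", "localhost", "minikube"},
		IPAddresses:  []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.50.213")},
	}
	der, err := x509.CreateCertificate(rand.Reader, tmpl, tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}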
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
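The clock check above asks the guest for its current time as seconds.nanoseconds, parses the result, and compares it with the host-side timestamp to confirm the skew stays within tolerance (about 84ms here). A rough reconstruction of that comparison using the values from the log; the one-second tolerance is an assumed value, not minikube's configured one:

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock turns the guest's "seconds.nanoseconds" output into a time.Time.
// It assumes a full 9-digit nanosecond field, as in the log output above.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		nsec, err = strconv.ParseInt(parts[1], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec), nil
}

func main() {
	guest, err := parseGuestClock("1710444267.099898940")
	if err != nil {
		panic(err)
	}
	remote := time.Date(2024, 3, 14, 19, 24, 27, 15704928, time.UTC) // host-side timestamp from the log
	delta := guest.Sub(remote)
	const tolerance = time.Second // assumed tolerance for this sketch
	fmt.Printf("delta=%v within tolerance: %v\n", delta, delta < tolerance && delta > -tolerance)
}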
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
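The block above rewrites /etc/crio/crio.conf.d/02-crio.conf (pause image, cgroupfs cgroup manager, conmon_cgroup = "pod"), clears /etc/cni/net.mk, loads br_netfilter, enables IPv4 forwarding, and restarts CRI-O. A dry-run sketch that reassembles those shell commands; it only prints them rather than executing them over SSH, and it leaves out the crictl.yaml write and the docker/cri-docker disabling steps shown earlier:

package main

import "fmt"

func main() {
	// CRI-O reconfiguration commands as they appear in the log above.
	pause := "registry.k8s.io/pause:3.9"
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	cmds := []string{
		fmt.Sprintf(`sudo sed -i 's|^.*pause_image = .*$|pause_image = "%s"|' %s`, pause, conf),
		fmt.Sprintf(`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"sudo rm -rf /etc/cni/net.mk",
		"sudo modprobe br_netfilter",
		`sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"`,
		"sudo systemctl daemon-reload",
		"sudo systemctl restart crio",
	}
	for _, c := range cmds {
		fmt.Println(c)
	}
}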
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:
false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
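Since no preloaded images were found in the CRI-O store, the ~458 MB preload tarball is copied to the guest and unpacked into /var with lz4 decompression, after which the tarball is removed. A sketch of the extraction step using the same tar flags as the log; it assumes the tarball is already at /preloaded.tar.lz4 and that sudo and lz4 are available on the machine where it runs:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Unpack the preloaded image tarball into /var and record how long it took,
	// mirroring the "duration metric" lines in the log.
	start := time.Now()
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s\n", err, out)
		return
	}
	fmt.Printf("extracted preload in %s\n", time.Since(start))
}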
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodP
ath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
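The three blocks above install CA certificates the way OpenSSL expects them: each PEM under /usr/share/ca-certificates is hashed with "openssl x509 -hash -noout" and a <hash>.0 symlink is created in /etc/ssl/certs (b5213941.0, 51391683.0 and 3ec20f2e.0 in this run). A small sketch that computes the subject hash and prints the corresponding symlink command; it shells out to openssl rather than reimplementing the hash, and skips the root-only symlink itself:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// Compute the OpenSSL subject hash for a CA PEM and show the symlink that
	// would make it discoverable under /etc/ssl/certs.
	pemPath := "/usr/share/ca-certificates/minikubeCA.pem"
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pemPath).Output()
	if err != nil {
		fmt.Println("openssl failed:", err)
		return
	}
	hash := strings.TrimSpace(string(out))
	fmt.Printf("ln -fs %s /etc/ssl/certs/%s.0\n", pemPath, hash)
}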
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28
.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fal
se MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersio
n:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:fa
lse MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt St
aticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v
1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false
MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/
ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
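
The one-liner above rewrites /etc/hosts idempotently: it drops any stale control-plane.minikube.internal entry and appends the current mapping. A rough, hypothetical Go equivalent (minikube itself runs the bash command over SSH; the host name and IP are copied from the log):

    package main

    import (
        "log"
        "os"
        "strings"
    )

    func main() {
        const host = "control-plane.minikube.internal"
        const ip = "192.168.61.88" // value from the log above

        data, err := os.ReadFile("/etc/hosts")
        if err != nil {
            log.Fatal(err)
        }
        var kept []string
        for _, line := range strings.Split(strings.TrimRight(string(data), "\n"), "\n") {
            // Drop any stale mapping for the control-plane name.
            if strings.HasSuffix(line, "\t"+host) {
                continue
            }
            kept = append(kept, line)
        }
        kept = append(kept, ip+"\t"+host)
        // Writing /etc/hosts needs root; 0644 is the conventional mode.
        if err := os.WriteFile("/etc/hosts", []byte(strings.Join(kept, "\n")+"\n"), 0644); err != nil {
            log.Fatal(err)
        }
    }
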
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
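
Each ln -fs above creates the <subject-hash>.0 symlink that OpenSSL-based clients use to find a CA under /etc/ssl/certs. A small illustrative Go check that shells out to the same openssl x509 -hash command used in the log (paths copied from the log; not minikube's code):

    package main

    import (
        "fmt"
        "log"
        "os"
        "os/exec"
        "strings"
    )

    func main() {
        pem := "/usr/share/ca-certificates/minikubeCA.pem" // path from the log above

        // Same command the log runs remotely: print the cert's subject hash.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", pem).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        // OpenSSL looks the CA up as /etc/ssl/certs/<subject-hash>.0.
        link := "/etc/ssl/certs/" + hash + ".0"
        if _, err := os.Lstat(link); err != nil {
            log.Fatalf("expected symlink %s is missing: %v", link, err)
        }
        fmt.Println("trust-store symlink present:", link)
    }
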
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
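
The openssl x509 -checkend 86400 calls above ask whether each certificate will still be valid 24 hours from now; a failure would trigger regeneration. A standalone Go sketch of the same check using crypto/x509 (the path is one of the certs listed above; illustrative only):

    package main

    import (
        "crypto/x509"
        "encoding/pem"
        "fmt"
        "log"
        "os"
        "time"
    )

    func main() {
        // One of the certs the log checks with `openssl x509 -checkend 86400`.
        data, err := os.ReadFile("/var/lib/minikube/certs/apiserver-kubelet-client.crt")
        if err != nil {
            log.Fatal(err)
        }
        block, _ := pem.Decode(data)
        if block == nil {
            log.Fatal("no PEM block found")
        }
        cert, err := x509.ParseCertificate(block.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        // -checkend 86400: will the cert have expired 24h from now?
        if time.Now().Add(24 * time.Hour).After(cert.NotAfter) {
            fmt.Println("certificate expires within 24h")
        } else {
            fmt.Println("certificate valid beyond the next 24h")
        }
    }
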
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
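
The five kubeadm init phase invocations above are how the restart path regenerates certs, kubeconfigs, the kubelet bootstrap, static control-plane manifests and local etcd without running a full kubeadm init. A hypothetical Go driver that would replay the same sequence via os/exec (version string, paths and phase names copied from the log):

    package main

    import (
        "fmt"
        "log"
        "os/exec"
    )

    func main() {
        // Same phase order as the log above.
        phases := []string{
            "certs all",
            "kubeconfig all",
            "kubelet-start",
            "control-plane all",
            "etcd local",
        }
        for _, p := range phases {
            cmd := `sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" ` +
                "kubeadm init phase " + p + " --config /var/tmp/minikube/kubeadm.yaml"
            out, err := exec.Command("/bin/bash", "-c", cmd).CombinedOutput()
            if err != nil {
                log.Fatalf("phase %q failed: %v\n%s", p, err, out)
            }
            fmt.Printf("phase %q done\n", p)
        }
    }
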
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
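
The healthz probe above goes through the typical restart sequence: connection refused while the static pod starts, 403 for the anonymous user, 500 while post-start hooks (rbac/bootstrap-roles, bootstrap-system-priority-classes) finish, then 200. A minimal Go poller in the same spirit, assuming it is acceptable to skip TLS verification against the cluster CA (endpoint taken from the log; this is not minikube's api_server.go):

    package main

    import (
        "crypto/tls"
        "fmt"
        "net/http"
        "time"
    )

    func main() {
        // Endpoint from the log above; the apiserver uses a cluster-internal CA,
        // so certificate verification is skipped in this sketch.
        url := "https://192.168.61.88:8444/healthz"
        client := &http.Client{
            Timeout:   5 * time.Second,
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        deadline := time.Now().Add(2 * time.Minute)
        for time.Now().Before(deadline) {
            resp, err := client.Get(url)
            if err == nil {
                resp.Body.Close()
                if resp.StatusCode == http.StatusOK {
                    fmt.Println("apiserver healthy")
                    return
                }
                fmt.Println("healthz returned", resp.StatusCode)
            } else {
                fmt.Println("healthz not reachable yet:", err)
            }
            time.Sleep(500 * time.Millisecond) // roughly the cadence seen in the log
        }
        fmt.Println("timed out waiting for healthz")
    }
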
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
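
The pod_ready.go waits above poll each control-plane pod until its Ready condition turns True, with a 4m0s budget per pod and an early skip while the node itself is not Ready. A hypothetical client-go loop doing the same for a single pod (kubeconfig path and pod name copied from the log; this is not minikube's implementation):

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isReady reports whether the pod has a Ready condition set to True.
    func isReady(pod *corev1.Pod) bool {
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
                return true
            }
        }
        return false
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }
        deadline := time.Now().Add(4 * time.Minute) // the 4m0s budget seen in the log
        for time.Now().Before(deadline) {
            pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
                "kube-scheduler-default-k8s-diff-port-440341", metav1.GetOptions{})
            if err == nil && isReady(pod) {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to become Ready")
    }
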
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
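	(The provision step above generates a server certificate whose SANs mix IPs and hostnames: 127.0.0.1, 192.168.39.148, localhost, minikube, no-preload-731976. As an aside, the Go sketch below shows how such a SAN list can be split into the IP and DNS fields of an x509 template; it is an illustrative stand-in under assumed names, not minikube's provision code, and the helper serverCertTemplate is hypothetical.)

	    package main

	    import (
	        "crypto/x509"
	        "crypto/x509/pkix"
	        "fmt"
	        "math/big"
	        "net"
	    )

	    // serverCertTemplate builds an x509 template whose SANs cover every name or
	    // IP a client may use to reach the node; IP-looking entries go into
	    // IPAddresses, everything else into DNSNames. Illustrative only.
	    func serverCertTemplate(org string, sans []string) *x509.Certificate {
	        tmpl := &x509.Certificate{
	            SerialNumber: big.NewInt(1),
	            Subject:      pkix.Name{Organization: []string{org}},
	            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
	        }
	        for _, s := range sans {
	            if ip := net.ParseIP(s); ip != nil {
	                tmpl.IPAddresses = append(tmpl.IPAddresses, ip)
	            } else {
	                tmpl.DNSNames = append(tmpl.DNSNames, s)
	            }
	        }
	        return tmpl
	    }

	    func main() {
	        t := serverCertTemplate("jenkins.no-preload-731976",
	            []string{"127.0.0.1", "192.168.39.148", "localhost", "minikube", "no-preload-731976"})
	        fmt.Println(t.DNSNames, t.IPAddresses)
	    }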
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
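	(The fix.go lines above read the guest clock over SSH with `date +%s.%N`, which the printf-style logger renders as `%!s(MISSING).%!N(MISSING)`, compare it to the host clock, and accept the ~90ms delta as within tolerance. A minimal Go sketch of that comparison follows; the helper name and the 2s threshold are assumptions for illustration, not minikube's values.)

	    package main

	    import (
	        "fmt"
	        "time"
	    )

	    // clockDeltaOK reports whether the absolute guest/host clock skew is within
	    // the allowed tolerance. The 2s limit used below is illustrative only.
	    func clockDeltaOK(guest, host time.Time, tolerance time.Duration) bool {
	        delta := guest.Sub(host)
	        if delta < 0 {
	            delta = -delta
	        }
	        return delta <= tolerance
	    }

	    func main() {
	        host := time.Now()
	        guest := host.Add(90 * time.Millisecond) // roughly the delta reported above
	        fmt.Println(clockDeltaOK(guest, host, 2*time.Second)) // true
	    }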
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
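	(The sed commands above adjust /etc/crio/crio.conf.d/02-crio.conf: they set pause_image to registry.k8s.io/pause:3.9, switch cgroup_manager to cgroupfs, and re-add conmon_cgroup = "pod". A rough Go equivalent of that key rewrite is sketched below; the helper is hypothetical and is not the code minikube actually runs.)

	    package main

	    import (
	        "fmt"
	        "regexp"
	    )

	    // setConfKey rewrites a `key = ...` line in a crio-style config snippet,
	    // mirroring the sed invocations in the log above. Illustrative only.
	    func setConfKey(conf, key, value string) string {
	        re := regexp.MustCompile(`(?m)^.*` + regexp.QuoteMeta(key) + ` = .*$`)
	        return re.ReplaceAllString(conf, key+` = "`+value+`"`)
	    }

	    func main() {
	        conf := "pause_image = \"registry.k8s.io/pause:3.8\"\ncgroup_manager = \"systemd\"\n"
	        conf = setConfKey(conf, "pause_image", "registry.k8s.io/pause:3.9")
	        conf = setConfKey(conf, "cgroup_manager", "cgroupfs")
	        fmt.Print(conf)
	    }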
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
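	(In the cache_images sequence above, each cached image tarball is first stat'ed on the guest; when it is already present the copy is skipped ("copy: skipping ... (exists)") and the file is loaded straight into the runtime with `sudo podman load -i`. The Go sketch below illustrates only the skip-if-exists decision, with a hypothetical helper and placeholder paths; it is not the ssh_runner implementation.)

	    package main

	    import (
	        "fmt"
	        "os"
	    )

	    // needsTransfer reports whether a cached image tarball still has to be copied
	    // to the guest: the copy is skipped when the destination already exists with
	    // the same size. Real code might also compare modification times. Illustrative only.
	    func needsTransfer(src, dst string) (bool, error) {
	        s, err := os.Stat(src)
	        if err != nil {
	            return false, err
	        }
	        d, err := os.Stat(dst)
	        if os.IsNotExist(err) {
	            return true, nil
	        }
	        if err != nil {
	            return false, err
	        }
	        return s.Size() != d.Size(), nil
	    }

	    func main() {
	        ok, err := needsTransfer("cache/etcd_3.5.10-0", "/var/lib/minikube/images/etcd_3.5.10-0")
	        fmt.Println(ok, err)
	    }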
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
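	(The two commands above make sure /etc/hosts resolves control-plane.minikube.internal to the node IP: any stale entry is filtered out with grep -v and the current "IP<tab>name" line is appended before the temp file is copied back over /etc/hosts. The same idea expressed in Go, as a sketch only with a hypothetical helper:)

	    package main

	    import (
	        "fmt"
	        "strings"
	    )

	    // ensureHostsEntry drops any existing line ending in "\t<name>" and appends a
	    // fresh "ip\tname" entry, mirroring the grep -v / echo / cp pipeline above.
	    // Illustrative only.
	    func ensureHostsEntry(hosts, ip, name string) string {
	        var kept []string
	        for _, line := range strings.Split(strings.TrimRight(hosts, "\n"), "\n") {
	            if !strings.HasSuffix(line, "\t"+name) {
	                kept = append(kept, line)
	            }
	        }
	        kept = append(kept, ip+"\t"+name)
	        return strings.Join(kept, "\n") + "\n"
	    }

	    func main() {
	        in := "127.0.0.1\tlocalhost\n192.168.39.1\thost.minikube.internal\n"
	        fmt.Print(ensureHostsEntry(in, "192.168.39.148", "control-plane.minikube.internal"))
	    }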
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
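The commands above show how the node's trust store is populated: each extra CA PEM is copied into /usr/share/ca-certificates, its OpenSSL subject hash is computed, and /etc/ssl/certs/<hash>.0 is symlinked back to it so OpenSSL can find it. Below is a minimal Go sketch of that sequence, shelling out to openssl just as the ssh_runner commands do; the helper name and hard-coded paths are illustrative, not minikube's actual code.

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// installCACert copies certPath into /usr/share/ca-certificates and creates
// the /etc/ssl/certs/<subject-hash>.0 symlink that OpenSSL uses for lookup,
// roughly what the "openssl x509 -hash" + "ln -fs" commands in the log do.
func installCACert(certPath string) error {
	dst := filepath.Join("/usr/share/ca-certificates", filepath.Base(certPath))
	data, err := os.ReadFile(certPath)
	if err != nil {
		return err
	}
	if err := os.WriteFile(dst, data, 0644); err != nil {
		return err
	}
	// "openssl x509 -hash -noout -in <cert>" prints the subject hash, e.g. b5213941.
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", dst).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // mirror the force flag of "ln -fs"
	return os.Symlink(dst, link)
}

func main() {
	if err := installCACert("/usr/share/ca-certificates/minikubeCA.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}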
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
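Each of the six "-checkend 86400" runs above asks openssl whether the certificate expires within the next 24 hours. The equivalent check can be done natively with crypto/x509; the snippet below is a sketch under that assumption, not the code minikube runs (minikube shells out to openssl on the guest).

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the PEM certificate at path expires within d,
// the same condition "openssl x509 -checkend 86400" tests with d = 24h.
func expiresWithin(path string, d time.Duration) (bool, error) {
	raw, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(raw)
	if block == nil {
		return false, fmt.Errorf("no PEM data in %s", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		return
	}
	fmt.Println("expires within 24h:", soon)
}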
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
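The healthz wait above retries https://192.168.39.148:8443/healthz, treating connection refused, 403 ("system:anonymous"), and 500 (post-start hooks still running) as "not ready yet" and stopping on the first 200. A hedged Go sketch of such a polling loop follows; it is not minikube's api_server.go, and the InsecureSkipVerify transport is an assumption made only so the standalone probe can talk to a self-signed apiserver cert.

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

// waitForHealthz polls the apiserver /healthz endpoint until it returns 200
// or the timeout elapses, mirroring the retry-on-403/500 behaviour in the log.
func waitForHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		// The apiserver serving cert is not in the host trust store here,
		// so verification is skipped purely for this illustrative probe.
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver did not become healthy within %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.148:8443/healthz", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}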
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
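The 457-byte file scp'd to /etc/cni/net.d/1-k8s.conflist above is the bridge CNI configuration that goes with the "kvm2 + crio -> bridge" recommendation a few lines earlier. The sketch below writes a typical bridge conflist; the subnet and plugin options are illustrative defaults, not necessarily the exact file minikube generates.

package main

import "os"

// A typical bridge CNI conflist of the kind written to /etc/cni/net.d/1-k8s.conflist.
// Field values here are assumptions for illustration.
const bridgeConflist = `{
  "cniVersion": "0.3.1",
  "name": "bridge",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "bridge",
      "addIf": "true",
      "isDefaultGateway": true,
      "ipMasq": true,
      "hairpinMode": true,
      "ipam": {
        "type": "host-local",
        "subnet": "10.244.0.0/16"
      }
    },
    {
      "type": "portmap",
      "capabilities": { "portMappings": true }
    }
  ]
}`

func main() {
	// Writing this path requires root on the node; shown only to make the sketch runnable.
	if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0644); err != nil {
		panic(err)
	}
}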
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
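The pod_ready.go block above polls each system-critical pod for a Ready condition, short-circuiting while the node itself still reports "Ready":"False". A minimal client-go sketch of that per-pod wait is shown below, assuming the on-node kubeconfig path from the log; it is an illustration of the check, not minikube's implementation.

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// waitPodReady polls a pod until its Ready condition is True or the timeout
// elapses, the condition each pod_ready.go wait above is looking for.
func waitPodReady(cs *kubernetes.Clientset, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
		if err == nil {
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady && c.Status == corev1.ConditionTrue {
					return nil
				}
			}
		}
		time.Sleep(2 * time.Second)
	}
	return fmt.Errorf("pod %s/%s not Ready within %s", ns, name, timeout)
}

func main() {
	config, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}
	if err := waitPodReady(cs, "kube-system", "coredns-76f75df574-mcddh", 4*time.Minute); err != nil {
		fmt.Println(err)
	}
}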
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
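"Configuring bridge CNI" above writes a 457-byte conflist into /etc/cni/net.d; the file contents are not reproduced in the log. Below is a representative bridge-plus-portmap conflist of the kind that step installs, written out the same way; the 10.244.0.0/16 subnet and the exact plugin fields are assumptions, not taken from the log:

    package main

    import (
        "log"
        "os"
    )

    // A representative bridge CNI config with host-local IPAM and port mapping.
    // The real 457-byte file written by minikube may differ; the 10.244.0.0/16
    // subnet here is an assumption, not taken from the log.
    const bridgeConflist = `{
      "cniVersion": "0.3.1",
      "name": "bridge",
      "plugins": [
        {
          "type": "bridge",
          "bridge": "bridge",
          "isDefaultGateway": true,
          "ipMasq": true,
          "hairpinMode": true,
          "ipam": {"type": "host-local", "subnet": "10.244.0.0/16"}
        },
        {"type": "portmap", "capabilities": {"portMappings": true}}
      ]
    }`

    func main() {
        if err := os.MkdirAll("/etc/cni/net.d", 0o755); err != nil {
            log.Fatal(err)
        }
        if err := os.WriteFile("/etc/cni/net.d/1-k8s.conflist", []byte(bridgeConflist), 0o644); err != nil {
            log.Fatal(err)
        }
    }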
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
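The oom_adj probe above resolves the kube-apiserver PID with pgrep and reads its /proc oom_adj value (-16 here) before the RBAC bootstrap. A small sketch of the same read, assuming a single matching process:

    package main

    import (
        "fmt"
        "os"
        "os/exec"
        "strings"
    )

    func apiserverOOMAdj() (string, error) {
        // pgrep prints the PIDs of all matching processes, one per line; take the first.
        out, err := exec.Command("pgrep", "kube-apiserver").Output()
        if err != nil {
            return "", fmt.Errorf("kube-apiserver not running: %w", err)
        }
        pid := strings.Fields(string(out))[0]
        val, err := os.ReadFile("/proc/" + pid + "/oom_adj")
        if err != nil {
            return "", err
        }
        return strings.TrimSpace(string(val)), nil
    }

    func main() {
        adj, err := apiserverOOMAdj()
        if err != nil {
            fmt.Println(err)
            return
        }
        fmt.Println("apiserver oom_adj:", adj)
    }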
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
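The repeated "kubectl get sa default" runs above are a poll: minikube retries roughly every half second until the default service account exists, at which point the kube-system privilege elevation is considered done. A client-go sketch of the same wait, assuming the kubeconfig path shown in the log:

    package main

    import (
        "context"
        "fmt"
        "time"

        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // waitForDefaultSA polls until the "default" ServiceAccount exists, which is
    // what the repeated "kubectl get sa default" calls in the log are checking.
    func waitForDefaultSA(client kubernetes.Interface, timeout time.Duration) error {
        deadline := time.Now().Add(timeout)
        for time.Now().Before(deadline) {
            _, err := client.CoreV1().ServiceAccounts("default").Get(context.TODO(), "default", metav1.GetOptions{})
            if err == nil {
                return nil
            }
            time.Sleep(500 * time.Millisecond) // the log shows roughly half-second retries
        }
        return fmt.Errorf("default service account not created within %s", timeout)
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        if err := waitForDefaultSA(client, 2*time.Minute); err != nil {
            panic(err)
        }
        fmt.Println("default service account is present")
    }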
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
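The pod_ready lines that dominate this log come from a poll that fetches each pod and checks its PodReady condition, printing Ready:"False" every few seconds until the condition flips or the wait budget (4m0s or 6m0s here) runs out. A minimal client-go sketch of that condition check, assuming the same kubeconfig path; the pod name below is just the coredns pod from this run:

    package main

    import (
        "context"
        "fmt"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    // isPodReady reports whether the pod's PodReady condition is True,
    // the same condition the pod_ready.go lines are printing.
    func isPodReady(client kubernetes.Interface, ns, name string) (bool, error) {
        pod, err := client.CoreV1().Pods(ns).Get(context.TODO(), name, metav1.GetOptions{})
        if err != nil {
            return false, err
        }
        for _, c := range pod.Status.Conditions {
            if c.Type == corev1.PodReady {
                return c.Status == corev1.ConditionTrue, nil
            }
        }
        return false, nil
    }

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        deadline := time.Now().Add(4 * time.Minute) // same order of budget as the log's waits
        for time.Now().Before(deadline) {
            ready, err := isPodReady(client, "kube-system", "coredns-5dd5756b68-ngbmj")
            if err == nil && ready {
                fmt.Println("pod is Ready")
                return
            }
            time.Sleep(2 * time.Second)
        }
        fmt.Println("timed out waiting for pod to be Ready")
    }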
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
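The health wait above polls the apiserver's /healthz endpoint until it answers 200 with "ok", then reads the control-plane version. A bare-bones sketch of that probe; skipping TLS verification is a simplification (the real check authenticates with the cluster's client certificates), and it relies on /healthz being readable by unauthenticated clients, as it is on default kubeadm clusters:

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkHealthz performs the same GET the log shows against the apiserver's
    // /healthz endpoint and reports the status code and body ("ok" when healthy).
    func checkHealthz(url string) error {
        client := &http.Client{
            Timeout: 5 * time.Second,
            // Simplification: the real check presents client certificates instead
            // of disabling verification.
            Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
        }
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        fmt.Printf("%s returned %d: %s\n", url, resp.StatusCode, body)
        return nil
    }

    func main() {
        _ = checkHealthz("https://192.168.50.213:8443/healthz")
    }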
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
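The NodePressure verification above reads node capacity from the node status (17734596Ki of ephemeral storage and 2 CPUs on this VM). A short client-go sketch that prints the same two capacity fields:

    package main

    import (
        "context"
        "fmt"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/var/lib/minikube/kubeconfig")
        if err != nil {
            panic(err)
        }
        client := kubernetes.NewForConfigOrDie(cfg)
        nodes, err := client.CoreV1().Nodes().List(context.TODO(), metav1.ListOptions{})
        if err != nil {
            panic(err)
        }
        for _, n := range nodes.Items {
            // The same capacity fields the log reports: ephemeral storage and CPU.
            storage := n.Status.Capacity[corev1.ResourceEphemeralStorage]
            cpu := n.Status.Capacity[corev1.ResourceCPU]
            fmt.Printf("node %s: ephemeral storage %s, cpu %s\n", n.Name, storage.String(), cpu.String())
        }
    }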
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
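The four grep/rm pairs above are minikube's stale-kubeconfig check: each kubeconfig under /etc/kubernetes is kept only if it already points at the expected apiserver endpoint, otherwise it is removed before the upcoming kubeadm init. A minimal standalone sketch of that pattern (endpoint and file names are taken from the log lines above; everything else is illustrative):

    endpoint="https://control-plane.minikube.internal:8444"
    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # grep exits non-zero if the file is missing or points elsewhere; remove it in that case
      sudo grep -q "$endpoint" "/etc/kubernetes/$f" 2>/dev/null || sudo rm -f "/etc/kubernetes/$f"
    done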
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
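Each "Gathering logs for ..." line above corresponds to minikube shelling out to journalctl or crictl on the node. To collect the same data by hand, roughly (a sketch; the container ID would come from crictl ps, exactly as in the logged commands):

    sudo journalctl -u kubelet -n 400
    sudo journalctl -u crio -n 400
    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
    id=$(sudo crictl ps -a --quiet --name=kube-apiserver | head -n 1)
    sudo crictl logs --tail 400 "$id"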
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
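The bootstrap token in the join commands printed above is short-lived (kubeadm tokens default to a 24h TTL), so the command is only valid around the time of this run. If an equivalent were needed later, kubeadm can reissue one on the control plane; a sketch, not taken from this log:

    sudo kubeadm token create --print-join-command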
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
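The scp above writes the single bridge CNI definition (457 bytes) announced by the "Configuring bridge CNI" step. To see exactly what landed on the node, one could read it back directly (path from the log line above):

    sudo cat /etc/cni/net.d/1-k8s.conflist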
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
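The healthz wait above polls the apiserver endpoint directly. A manual equivalent against the same address (host and port from the log; -k skips certificate verification for brevity, or point curl at the cluster CA instead):

    curl -k https://192.168.39.148:8443/healthz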
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
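Note that the profile is reported as Done even though metrics-server-57f55c9bc5-rhg5r never became Ready within the 4m wait earlier in this stream. A follow-up inspection of that pod would look roughly like this (context and pod name taken from the log; purely diagnostic, not part of the test run):

    kubectl --context no-preload-731976 -n kube-system get pods
    kubectl --context no-preload-731976 -n kube-system describe pod metrics-server-57f55c9bc5-rhg5r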
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
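The long run of "kubectl get sa default" calls above (19:30:22 through 19:30:33) is minikube waiting for the default ServiceAccount to appear, which happens once the controller manager's service-account machinery is up; the elevateKubeSystemPrivileges metric records that this took about 11.9s. An equivalent wait loop, sketched with the binary and kubeconfig paths shown in the logged commands:

    kcfg=/var/lib/minikube/kubeconfig
    kubectl_bin=/var/lib/minikube/binaries/v1.28.4/kubectl
    # poll until the default ServiceAccount exists, roughly matching the ~0.5s cadence in the log
    until sudo "$kubectl_bin" get sa default --kubeconfig="$kcfg" >/dev/null 2>&1; do
      sleep 0.5
    done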
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
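	The suggestion and related issue above point at a kubelet cgroup-driver mismatch as the likely reason the kubelet never answered on 127.0.0.1:10248. As an illustrative sketch only (the profile name and driver flags below are placeholders, not values taken from this run), the retry the message hints at would look roughly like:

		# inspect why the kubelet never came up (commands quoted in the output above)
		systemctl status kubelet
		journalctl -xeu kubelet
		crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

		# re-run the start with the cgroup driver pinned to systemd, as the suggestion advises
		minikube start -p <profile> --driver=kvm2 --container-runtime=crio \
		  --kubernetes-version=v1.20.0 --extra-config=kubelet.cgroup-driver=systemd

	If that still fails, the boxed advice above applies: collect `minikube logs --file=logs.txt` and attach it to a new issue.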
	
	
	==> CRI-O <==
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.951682867Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=57fe463b-9dcf-4a03-869e-def1e7fcf51f name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.952563174Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=76b5b2bb-9941-4ff9-8c7e-ffa6dbb66e21 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.953098608Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445179952947582,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=76b5b2bb-9941-4ff9-8c7e-ffa6dbb66e21 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.953850878Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a6be0c23-e81a-4deb-867f-50fb0210cc95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.953930680Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a6be0c23-e81a-4deb-867f-50fb0210cc95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:39 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:39.954237565Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a6be0c23-e81a-4deb-867f-50fb0210cc95 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.000920225Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4d9d6bbf-d16d-4d56-8dd6-51e37836ff2c name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.001095729Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4d9d6bbf-d16d-4d56-8dd6-51e37836ff2c name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.002462717Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=af2232e8-6f45-4b62-b1b3-a24f096ef926 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.003519796Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445180003495289,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=af2232e8-6f45-4b62-b1b3-a24f096ef926 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.004083375Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=18bacef9-7907-4015-a0a2-e575dcec4419 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.004186547Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=18bacef9-7907-4015-a0a2-e575dcec4419 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.004340895Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=18bacef9-7907-4015-a0a2-e575dcec4419 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.038485845Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=c7584eb2-82bf-45e0-a6c0-8ea4ccac2c15 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.038580695Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=c7584eb2-82bf-45e0-a6c0-8ea4ccac2c15 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.040085977Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=4d4f65e6-a3aa-4754-8382-a5fc27ec8864 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.040458891Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445180040433095,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=4d4f65e6-a3aa-4754-8382-a5fc27ec8864 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.041327840Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=c65e1ddc-1c53-46b3-8051-a309471aed23 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.041538556Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:bc410b26725aa0475a8bf8688797b1b68dbe19069d551bc2b18919f73ad48770,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-p7s4d,Uid:1b13ae7e-62a0-429c-bf4f-0f38b222db7e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444636821591969,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-p7s4d,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 1b13ae7e-62a0-429c-bf4f-0f38b222db7e,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:30:36.503586583Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:daafd1bc-b1f1-4dab-b615-8364
e22f984f,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444636786630925,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provision
er\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T19:30:36.478961957Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-qkhfs,Uid:ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444634418180327,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:30:34.094394733Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-g4dzq,Uid:9e849b06
-74f4-4d8e-95b1-16136db8faee,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444634324477762,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:30:34.011387227Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&PodSandboxMetadata{Name:kube-proxy-h7hdc,Uid:e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444634052250730,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,k8s-app: kube-pro
xy,pod-template-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:30:33.738071865Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&PodSandboxMetadata{Name:kube-scheduler-default-k8s-diff-port-440341,Uid:e68f66e6b599f1d5cb92b8b9be82039e,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444614772521276,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: e68f66e6b599f1d5cb92b8b9be82039e,kubernetes.io/config.seen: 2024-03-14T19:30:14.308059318Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&PodSandb
oxMetadata{Name:kube-controller-manager-default-k8s-diff-port-440341,Uid:d0131674def7e568083bac27c383f9e4,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444614769145937,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: d0131674def7e568083bac27c383f9e4,kubernetes.io/config.seen: 2024-03-14T19:30:14.308057936Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&PodSandboxMetadata{Name:kube-apiserver-default-k8s-diff-port-440341,Uid:c8a372661903be7fb35d52cbda0251c8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444614766501976,Labels:map[string]string{component: kube-apiserver,io.kubernete
s.container.name: POD,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.61.88:8444,kubernetes.io/config.hash: c8a372661903be7fb35d52cbda0251c8,kubernetes.io/config.seen: 2024-03-14T19:30:14.308055413Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&PodSandboxMetadata{Name:etcd-default-k8s-diff-port-440341,Uid:29a6694ff3899537a529b8ebec96a741,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444614754377547,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,tier: control-plane,},Annotati
ons:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.61.88:2379,kubernetes.io/config.hash: 29a6694ff3899537a529b8ebec96a741,kubernetes.io/config.seen: 2024-03-14T19:30:14.307960643Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=c65e1ddc-1c53-46b3-8051-a309471aed23 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.042240868Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=6e202bac-5eb5-4f89-ad6c-c0148d4d6824 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.042292063Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=6e202bac-5eb5-4f89-ad6c-c0148d4d6824 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.042489200Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=6e202bac-5eb5-4f89-ad6c-c0148d4d6824 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.043722353Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b506f0ac-2c35-4f43-973d-429fb1237cb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.043769440Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=b506f0ac-2c35-4f43-973d-429fb1237cb7 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:39:40 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:39:40.043915025Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=b506f0ac-2c35-4f43-973d-429fb1237cb7 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	18d1cd32af6f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   9 minutes ago       Running             storage-provisioner       0                   5829a9a3b1c6a       storage-provisioner
	516979e575152       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   6994be9d850c8       coredns-5dd5756b68-qkhfs
	f2cf12f483037       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   9 minutes ago       Running             coredns                   0                   dc6b5b0e66cf5       coredns-5dd5756b68-g4dzq
	b90a8fc014011       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   9 minutes ago       Running             kube-proxy                0                   c04522b95d25f       kube-proxy-h7hdc
	093b075595071       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   9 minutes ago       Running             kube-scheduler            2                   2ddea20fe163f       kube-scheduler-default-k8s-diff-port-440341
	f2e22802c857e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   9 minutes ago       Running             etcd                      2                   a057ad8c93622       etcd-default-k8s-diff-port-440341
	e071343bddeb2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   9 minutes ago       Running             kube-apiserver            2                   26576956c83ee       kube-apiserver-default-k8s-diff-port-440341
	978f9b7e919fb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   9 minutes ago       Running             kube-controller-manager   2                   1aace104961e9       kube-controller-manager-default-k8s-diff-port-440341
	
	
	==> coredns [516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-440341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-440341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=default-k8s-diff-port-440341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-440341
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:39:33 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:35:47 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:35:47 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:35:47 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:35:47 +0000   Thu, 14 Mar 2024 19:30:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.88
	  Hostname:    default-k8s-diff-port-440341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1b94d723bbc44fdae20abac75fa217c
	  System UUID:                a1b94d72-3bbc-44fd-ae20-abac75fa217c
	  Boot ID:                    b763291f-3f1d-4c8f-a3df-481acb31857c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-g4dzq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m7s
	  kube-system                 coredns-5dd5756b68-qkhfs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     9m6s
	  kube-system                 etcd-default-k8s-diff-port-440341                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         9m21s
	  kube-system                 kube-apiserver-default-k8s-diff-port-440341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-controller-manager-default-k8s-diff-port-440341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         9m18s
	  kube-system                 kube-proxy-h7hdc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m7s
	  kube-system                 kube-scheduler-default-k8s-diff-port-440341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         9m20s
	  kube-system                 metrics-server-57f55c9bc5-p7s4d                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         9m4s
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         9m4s
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                    From             Message
	  ----    ------                   ----                   ----             -------
	  Normal  Starting                 9m4s                   kube-proxy       
	  Normal  Starting                 9m26s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m26s (x8 over 9m26s)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m26s (x7 over 9m26s)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m26s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 9m19s                  kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  9m19s                  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    9m19s                  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     9m19s                  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  9m19s                  kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           9m8s                   node-controller  Node default-k8s-diff-port-440341 event: Registered Node default-k8s-diff-port-440341 in Controller
	
	
	==> dmesg <==
	[  +0.052325] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.046737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar14 19:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.580134] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.524848] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.010175] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.058083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069272] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.175764] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.177169] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.323997] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.901003] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.058042] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.894055] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +5.697405] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.992905] kauditd_printk_skb: 77 callbacks suppressed
	[Mar14 19:30] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.917254] systemd-fstab-generator[3421]: Ignoring "noauto" option for root device
	[  +4.822406] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.973079] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[ +12.558475] systemd-fstab-generator[3947]: Ignoring "noauto" option for root device
	[  +0.106553] kauditd_printk_skb: 14 callbacks suppressed
	[Mar14 19:31] kauditd_printk_skb: 80 callbacks suppressed
	
	
	==> etcd [f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9] <==
	{"level":"info","ts":"2024-03-14T19:30:15.459926Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c switched to configuration voters=(11821540688919011932)"}
	{"level":"info","ts":"2024-03-14T19:30:15.460339Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"d6b7474e07060719","local-member-id":"a40e8c9f94d8225c","added-peer-id":"a40e8c9f94d8225c","added-peer-peer-urls":["https://192.168.61.88:2380"]}
	{"level":"info","ts":"2024-03-14T19:30:15.473837Z","caller":"embed/etcd.go:726","msg":"starting with client TLS","tls-info":"cert = /var/lib/minikube/certs/etcd/server.crt, key = /var/lib/minikube/certs/etcd/server.key, client-cert=, client-key=, trusted-ca = /var/lib/minikube/certs/etcd/ca.crt, client-cert-auth = true, crl-file = ","cipher-suites":[]}
	{"level":"info","ts":"2024-03-14T19:30:15.474183Z","caller":"embed/etcd.go:278","msg":"now serving peer/client/metrics","local-member-id":"a40e8c9f94d8225c","initial-advertise-peer-urls":["https://192.168.61.88:2380"],"listen-peer-urls":["https://192.168.61.88:2380"],"advertise-client-urls":["https://192.168.61.88:2379"],"listen-client-urls":["https://127.0.0.1:2379","https://192.168.61.88:2379"],"listen-metrics-urls":["http://127.0.0.1:2381"]}
	{"level":"info","ts":"2024-03-14T19:30:15.474198Z","caller":"embed/etcd.go:597","msg":"serving peer traffic","address":"192.168.61.88:2380"}
	{"level":"info","ts":"2024-03-14T19:30:15.477069Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.61.88:2380"}
	{"level":"info","ts":"2024-03-14T19:30:15.47506Z","caller":"embed/etcd.go:855","msg":"serving metrics","address":"http://127.0.0.1:2381"}
	{"level":"info","ts":"2024-03-14T19:30:16.128083Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T19:30:16.128119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T19:30:16.128142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c received MsgPreVoteResp from a40e8c9f94d8225c at term 1"}
	{"level":"info","ts":"2024-03-14T19:30:16.128156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.128162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c received MsgVoteResp from a40e8c9f94d8225c at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.12817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became leader at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.128177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a40e8c9f94d8225c elected leader a40e8c9f94d8225c at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.132129Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.136882Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a40e8c9f94d8225c","local-member-attributes":"{Name:default-k8s-diff-port-440341 ClientURLs:[https://192.168.61.88:2379]}","request-path":"/0/members/a40e8c9f94d8225c/attributes","cluster-id":"d6b7474e07060719","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:30:16.136946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:30:16.140218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.88:2379"}
	{"level":"info","ts":"2024-03-14T19:30:16.140286Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:30:16.141084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:30:16.141449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d6b7474e07060719","local-member-id":"a40e8c9f94d8225c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.141808Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.141905Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.187223Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:30:16.187284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	
	
	==> kernel <==
	 19:39:40 up 14 min,  0 users,  load average: 0.03, 0.18, 0.18
	Linux default-k8s-diff-port-440341 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8] <==
	W0314 19:35:19.146110       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:35:19.146174       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:35:19.146185       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:35:19.146261       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:35:19.146336       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:35:19.147482       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:36:18.017233       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:36:19.146320       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:36:19.146379       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:36:19.146390       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:36:19.148738       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:36:19.148809       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:36:19.148817       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:37:18.017314       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 19:38:18.017588       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:38:19.146789       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:38:19.146914       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:38:19.146924       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:38:19.149134       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:38:19.149313       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:38:19.149348       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:39:18.017196       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295] <==
	I0314 19:34:06.757760       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="169.406µs"
	E0314 19:34:33.140090       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:34:33.608133       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:03.145829       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:03.617398       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:35:33.152896       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:35:33.627530       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:36:03.158504       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:03.636294       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:36:33.165050       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:36:33.646298       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:36:45.771180       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="450.476µs"
	I0314 19:36:56.754827       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="238.707µs"
	E0314 19:37:03.170527       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:03.655893       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:37:33.177538       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:37:33.668472       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:03.183542       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:03.680561       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:38:33.189156       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:38:33.691139       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:39:03.195674       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:39:03.703628       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:39:33.202546       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:39:33.716224       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1] <==
	I0314 19:30:35.435124       1 server_others.go:69] "Using iptables proxy"
	I0314 19:30:35.479155       1 node.go:141] Successfully retrieved node IP: 192.168.61.88
	I0314 19:30:35.712378       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:30:35.712431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:30:35.715701       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:30:35.741086       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:30:35.741384       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:30:35.741397       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:30:35.773192       1 config.go:188] "Starting service config controller"
	I0314 19:30:35.774265       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:30:35.774342       1 config.go:315] "Starting node config controller"
	I0314 19:30:35.774350       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:30:35.774875       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:30:35.774911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:30:35.875108       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:30:35.875193       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:30:35.875458       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f] <==
	W0314 19:30:18.282536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 19:30:18.284669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 19:30:18.283663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:30:18.284686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:30:18.283733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:18.284702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:18.284227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:30:18.285213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:30:19.116850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:30:19.117234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:30:19.119726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:30:19.119934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:30:19.163104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:19.163219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:19.196882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:19.197510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:19.280098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 19:30:19.280150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 19:30:19.284653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 19:30:19.284713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 19:30:19.291339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:30:19.291388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:30:19.632261       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:30:19.632313       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:30:21.955251       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:37:21 default-k8s-diff-port-440341 kubelet[3755]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:37:21 default-k8s-diff-port-440341 kubelet[3755]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:37:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:37:23 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:37:23.739592    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:37:34 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:37:34.738231    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:37:48 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:37:48.737917    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:38:00 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:00.738437    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:38:14 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:14.737826    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:38:21 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:21.759361    3755 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:38:21 default-k8s-diff-port-440341 kubelet[3755]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:38:21 default-k8s-diff-port-440341 kubelet[3755]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:38:21 default-k8s-diff-port-440341 kubelet[3755]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:38:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:38:28 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:28.737793    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:38:39 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:39.737492    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:38:51 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:38:51.743812    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:39:02 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:39:02.738867    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:39:13 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:39:13.739238    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:39:21 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:39:21.759580    3755 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:39:21 default-k8s-diff-port-440341 kubelet[3755]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:39:21 default-k8s-diff-port-440341 kubelet[3755]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:39:21 default-k8s-diff-port-440341 kubelet[3755]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:39:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:39:27 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:39:27.738321    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:39:38 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:39:38.737860    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	
	
	==> storage-provisioner [18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85] <==
	I0314 19:30:37.091465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:30:37.111088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:30:37.111165       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:30:37.121950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:30:37.122159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3!
	I0314 19:30:37.123430       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bd26e97-7a14-45c9-a3b4-49c925374eec", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3 became leader
	I0314 19:30:37.222391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-p7s4d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d: exit status 1 (75.347444ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-p7s4d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/UserAppExistsAfterStop (544.19s)
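The failure above is the harness timing out while polling for a user-app pod by label selector and then dumping post-mortem diagnostics once the pod never reaches Running. The following is a minimal illustrative sketch of that kind of label-selector poll written against client-go; the kubeconfig path, namespace, label, and intervals are assumptions for illustration only, not the test harness's actual helper code.

// poll_pods.go: illustrative sketch; not taken from the minikube test harness.
package main

import (
	"context"
	"fmt"
	"os"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Assumed kubeconfig location; minikube normally merges its contexts into ~/.kube/config.
	cfg, err := clientcmd.BuildConfigFromFlags("", os.Getenv("HOME")+"/.kube/config")
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	deadline := time.Now().Add(9 * time.Minute) // mirrors the 9m0s wait in the failing test
	for time.Now().Before(deadline) {
		pods, err := client.CoreV1().Pods("kubernetes-dashboard").List(context.Background(),
			metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
		if err != nil {
			// When the API server is unreachable, the poll logs a warning and retries;
			// this is the source of the repeated "connection refused" lines in the report.
			fmt.Println("WARNING:", err)
		} else {
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					fmt.Println("found running pod:", p.Name)
					return
				}
			}
		}
		time.Sleep(10 * time.Second)
	}
	fmt.Fprintln(os.Stderr, "timed out waiting for k8s-app=kubernetes-dashboard")
	os.Exit(1)
}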

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop
start_stop_delete_test.go:274: (dbg) TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: (the identical "connection refused" pod-list warning above repeats on every subsequent poll while the API server at 192.168.72.211:8443 remains unreachable)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
(the WARNING above repeats 75 more times while the apiserver at 192.168.72.211:8443 refuses connections)
E0314 19:37:14.528067  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:37:14.853742  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
(the WARNING above repeats 102 more times while the apiserver at 192.168.72.211:8443 refuses connections)
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
E0314 19:40:17.907716  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
start_stop_delete_test.go:274: ***** TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:274: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094
start_stop_delete_test.go:274: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (255.636511ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:274: status error: exit status 2 (may be ok)
start_stop_delete_test.go:274: "old-k8s-version-968094" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:275: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (250.938117ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25: (1.510422244s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
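
The repeated "will retry after …: waiting for machine to come up" lines above come from a backoff-style polling loop that waits for the VM to obtain a DHCP lease. A minimal Go sketch of that pattern, assuming hypothetical names (this is not minikube's actual retry API):

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// waitForIP polls lookup until it returns a non-empty address or the deadline
// passes, sleeping a growing, jittered interval between attempts, the same
// shape as the "will retry after ..." lines above. Names and intervals are
// illustrative only.
func waitForIP(lookup func() (string, error), deadline time.Duration) (string, error) {
	start := time.Now()
	backoff := 200 * time.Millisecond
	for time.Since(start) < deadline {
		if ip, err := lookup(); err == nil && ip != "" {
			return ip, nil
		}
		wait := backoff + time.Duration(rand.Int63n(int64(backoff)))
		fmt.Printf("will retry after %v: waiting for machine to come up\n", wait)
		time.Sleep(wait)
		backoff *= 2
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	ip, err := waitForIP(func() (string, error) { return "192.168.50.213", nil }, time.Minute)
	fmt.Println(ip, err)
}
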
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
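
The "Using SSH client type: external" block above shells out to the system ssh binary with the flags shown, running `exit 0` as a reachability probe. A rough Go equivalent of that probe; the flags, key path, user, and address are copied from the log, everything else is a sketch rather than minikube's code:

package main

import (
	"fmt"
	"os/exec"
)

// Probe SSH on the VM: disable host-key checking, use the machine's private
// key, and run a no-op command so only connectivity is tested.
func main() {
	args := []string{
		"-F", "/dev/null",
		"-o", "ConnectionAttempts=3",
		"-o", "ConnectTimeout=10",
		"-o", "StrictHostKeyChecking=no",
		"-o", "UserKnownHostsFile=/dev/null",
		"-o", "IdentitiesOnly=yes",
		"-i", "/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa",
		"-p", "22",
		"docker@192.168.50.213",
		"exit 0",
	}
	out, err := exec.Command("/usr/bin/ssh", args...).CombinedOutput()
	fmt.Printf("SSH cmd err, output: %v: %s\n", err, out)
}
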
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
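
The "%!s(MISSING)" in the provisioning command above (and in several later commands in this log) is not part of the shell script that actually ran on the VM; it is Go's fmt package flagging a format verb with no matching argument, which happens when the already-formatted command line is echoed back through a Printf-style logger. A tiny reproduction:

package main

import "fmt"

// Passing an already-formatted string that contains a literal "%s" through
// Printf with no extra arguments makes fmt report the unmatched verb.
func main() {
	fmt.Printf("sudo mkdir -p /etc/sysconfig && printf %s \"...\"\n")
	// Prints: sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "..."
}
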
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
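
The guest-clock check above parses the VM's `date +%s.%N` output and subtracts the host timestamp taken at the same moment, accepting small skews. A short Go sketch that reproduces the 84.194012ms delta from the two timestamps in the log (the helper itself is illustrative, not minikube's fix.go):

package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

func main() {
	guestRaw := "1710444267.099898940" // what the VM's `date +%s.%N` returned
	parts := strings.SplitN(guestRaw, ".", 2)
	sec, _ := strconv.ParseInt(parts[0], 10, 64)
	nsec, _ := strconv.ParseInt(parts[1], 10, 64)
	guest := time.Unix(sec, nsec)
	host := time.Date(2024, 3, 14, 19, 24, 27, 15704928, time.UTC) // host clock from the log
	fmt.Printf("guest clock delta: %v\n", guest.Sub(host))         // 84.194012ms, within tolerance
}
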
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
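
The preload step above checks for /preloaded.tar.lz4 on the VM, copies the cached image tarball over SSH, unpacks it under /var, removes it, and then re-runs `crictl images` to confirm the images are present. A compact sketch of that flow, where copyToVM and runOnVM are hypothetical stand-ins for minikube's scp and ssh_runner helpers (only the paths and the tar command come from the log):

package main

import "fmt"

func ensurePreload(copyToVM func(src, dst string) error, runOnVM func(cmd string) error) error {
	src := "/home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4"
	if err := copyToVM(src, "/preloaded.tar.lz4"); err != nil {
		return err
	}
	if err := runOnVM("sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4"); err != nil {
		return err
	}
	return runOnVM("sudo rm -f /preloaded.tar.lz4")
}

func main() {
	err := ensurePreload(
		func(src, dst string) error { fmt.Println("scp", src, "->", dst); return nil },
		func(cmd string) error { fmt.Println("ssh:", cmd); return nil },
	)
	fmt.Println("err:", err)
}
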
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
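(The provisioning step above ends by writing the container-runtime options file over SSH; judging from the command and its echoed output, the resulting file is simply:

    # /etc/sysconfig/crio.minikube (as written by the provisioner)
    CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '

followed by a `sudo systemctl restart crio` so CRI-O picks the option up.)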
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
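(The guest-clock check above just compares `date +%s.%N` on the VM against the host clock; an equivalent manual check - illustrative only, reusing the profile name from this run - would be:

    guest=$(minikube -p old-k8s-version-968094 ssh -- date +%s.%N)
    host=$(date +%s.%N)
    # a small delta is tolerated; a large one would trigger a clock resync
    echo "guest/host clock delta: $(echo "$host - $guest" | bc) seconds"
)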
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
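(Based on the sed edits above, the relevant keys in the CRI-O drop-in should end up as pause_image = "registry.k8s.io/pause:3.2", cgroup_manager = "cgroupfs" and conmon_cgroup = "pod". A quick spot-check, outside the test itself:

    minikube -p old-k8s-version-968094 ssh -- sudo grep -E 'pause_image|cgroup_manager|conmon_cgroup' /etc/crio/crio.conf.d/02-crio.conf
)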
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
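(The repeated "will retry after ...: waiting for machine to come up" lines are the KVM driver polling libvirt for a DHCP lease with a growing, jittered delay. The same check can be done by hand roughly like this - an illustrative sketch using the network name and MAC address from this run; the exact virsh output layout may vary:

    for i in $(seq 1 20); do
      lease=$(sudo virsh net-dhcp-leases mk-default-k8s-diff-port-440341 | grep -i '52:54:00:39:02:6d')
      [ -n "$lease" ] && { echo "got lease: $lease"; break; }
      sleep $(( i / 2 + 1 ))   # crude stand-in for retry.go's growing backoff
    done
)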
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
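(The pod_ready polling above - first for kube-scheduler, now for metrics-server - is the same Ready-condition wait that kubectl exposes directly; a hand-run equivalent, assuming the kubeconfig context matches the profile name, would be:

    kubectl --context embed-certs-992669 -n kube-system wait \
      --for=condition=Ready pod/metrics-server-57f55c9bc5-w8cj6 --timeout=4m0s
)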
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
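(The warning above means the on-disk image cache under .minikube/cache/images was never populated for these v1.20.0 images, so minikube falls back to pulling them from the registry. Two hedged commands to inspect and, outside the test, repopulate the cache:

    ls -l /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/ 2>/dev/null
    # caching one of the missing images by hand (normally done automatically on start)
    minikube cache add registry.k8s.io/kube-scheduler:v1.20.0
)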
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
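(The /etc/ssl/certs/<hash>.0 symlinks created above - 3ec20f2e.0, b5213941.0 and 51391683.0 - follow OpenSSL's subject-hash naming: the link name is exactly what `openssl x509 -hash` prints for the certificate. For example, run inside the guest (e.g. via minikube ssh):

    # prints the subject hash (e.g. b5213941) used as the trust-store symlink name
    openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
)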
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
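(The -checkend 86400 probes above are plain expiry checks: openssl exits non-zero if the certificate will expire within the next 86400 seconds (24 h), which is how minikube decides whether the existing control-plane certs can be reused. By hand, inside the guest:

    sudo openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400 \
      && echo "valid for at least 24h" || echo "expiring soon"
)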
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
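
The grep/rm sequence above is minikube's stale-config cleanup: each file under /etc/kubernetes is checked for the expected control-plane endpoint and removed when it does not contain it (here the files are simply missing), after which the fresh kubeadm.yaml is copied into place. A minimal local sketch of that pattern, assuming plain os/exec in place of minikube's ssh_runner and treating the endpoint string as a hypothetical constant:

package main

import (
	"fmt"
	"os"
	"os/exec"
)

// endpoint is an assumed constant for illustration; minikube derives it from
// the cluster's API server name and port.
const endpoint = "https://control-plane.minikube.internal:8443"

func main() {
	confs := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, c := range confs {
		// grep exits non-zero when the endpoint is missing (or the file
		// does not exist); either way the config is treated as stale.
		if err := exec.Command("sudo", "grep", endpoint, c).Run(); err != nil {
			fmt.Fprintf(os.Stderr, "%s looks stale, removing\n", c)
			_ = exec.Command("sudo", "rm", "-f", c).Run()
		}
	}
}
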
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
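
The repeated "sudo pgrep -xnf kube-apiserver.*minikube.*" runs that follow the init phases are a fixed 500ms poll for the apiserver process to appear. A rough equivalent of that wait loop, with the overall timeout chosen here only for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func waitForAPIServerProcess(timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		// pgrep exits 0 once a matching kube-apiserver process exists.
		if exec.Command("sudo", "pgrep", "-xnf", "kube-apiserver.*minikube.*").Run() == nil {
			return nil
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("kube-apiserver process did not appear within %s", timeout)
}

func main() {
	if err := waitForAPIServerProcess(5 * time.Minute); err != nil {
		fmt.Println(err)
	}
}
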
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
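
The retry.go:31 lines above show libmachine polling libvirt for the domain's DHCP lease with an increasing, jittered delay between attempts. A simplified sketch of that backoff pattern, assuming a hypothetical lookupIP helper in place of the real libvirt query:

package main

import (
	"errors"
	"fmt"
	"math/rand"
	"time"
)

// lookupIP stands in for the libvirt DHCP-lease query; it always fails here
// purely to exercise the retry path.
func lookupIP(domain string) (string, error) {
	return "", errors.New("unable to find current IP address")
}

func waitForIP(domain string, attempts int) (string, error) {
	delay := 250 * time.Millisecond
	for i := 0; i < attempts; i++ {
		if ip, err := lookupIP(domain); err == nil {
			return ip, nil
		}
		// Grow the delay and add jitter, like the increasing
		// "will retry after ..." intervals in the log.
		jitter := time.Duration(rand.Int63n(int64(delay / 2)))
		time.Sleep(delay + jitter)
		delay *= 2
	}
	return "", fmt.Errorf("no IP for %s after %d attempts", domain, attempts)
}

func main() {
	if _, err := waitForIP("default-k8s-diff-port-440341", 4); err != nil {
		fmt.Println(err)
	}
}
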
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
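
provision.go regenerates the machine's server certificate so that its subject alternative names cover the loopback address, the VM's IP, the machine name, localhost and minikube, signing it with the CA kept under .minikube/certs. A compact, self-signed sketch of assembling that SAN list with crypto/x509 (self-signed only to keep the example short; the real certificate is CA-signed):

package main

import (
	"crypto/ecdsa"
	"crypto/elliptic"
	"crypto/rand"
	"crypto/x509"
	"crypto/x509/pkix"
	"encoding/pem"
	"math/big"
	"net"
	"os"
	"time"
)

func main() {
	key, err := ecdsa.GenerateKey(elliptic.P256(), rand.Reader)
	if err != nil {
		panic(err)
	}
	tmpl := x509.Certificate{
		SerialNumber: big.NewInt(1),
		Subject:      pkix.Name{Organization: []string{"jenkins.default-k8s-diff-port-440341"}},
		NotBefore:    time.Now(),
		NotAfter:     time.Now().Add(3 * 365 * 24 * time.Hour),
		KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
		ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
		// SANs matching the san=[...] list in the log line above.
		IPAddresses: []net.IP{net.ParseIP("127.0.0.1"), net.ParseIP("192.168.61.88")},
		DNSNames:    []string{"default-k8s-diff-port-440341", "localhost", "minikube"},
	}
	der, err := x509.CreateCertificate(rand.Reader, &tmpl, &tmpl, &key.PublicKey, key)
	if err != nil {
		panic(err)
	}
	_ = pem.Encode(os.Stdout, &pem.Block{Type: "CERTIFICATE", Bytes: der})
}
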
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
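
fix.go reads the guest clock over SSH (the "date +%s.%N" command, whose %s/%N verbs are mangled by the log formatter above), compares it with the host's view of the same moment, and only forces a resync when the absolute delta leaves a tolerance window; here the ~53ms delta is accepted. A tiny sketch of that comparison, with the tolerance value assumed for illustration:

package main

import (
	"fmt"
	"time"
)

// tolerance is an assumed value; the log above shows a delta of ~53ms being
// treated as acceptable.
const tolerance = 2 * time.Second

func withinTolerance(guest, host time.Time) (time.Duration, bool) {
	d := guest.Sub(host)
	if d < 0 {
		d = -d
	}
	return d, d <= tolerance
}

func main() {
	guest := time.Unix(1710444316, 7193080) // 1710444316.007193080, as in the log
	host := guest.Add(-52930817 * time.Nanosecond)
	d, ok := withinTolerance(guest, host)
	fmt.Printf("delta=%s within tolerance=%v\n", d, ok)
}
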
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
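
The block above reconfigures the guest's container runtime: crictl is pointed at the CRI-O socket, the pause image and cgroup manager are rewritten in /etc/crio/crio.conf.d/02-crio.conf via sed, br_netfilter and IPv4 forwarding are enabled, and CRI-O is restarted before minikube waits for its socket and checks the crictl version. A hedged sketch that replays the same commands with os/exec (they would normally run on the guest through ssh_runner, not locally):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Each entry mirrors one of the Run: lines in the log above.
	steps := [][]string{
		{"sudo", "sh", "-c", `sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf`},
		{"sudo", "sh", "-c", `sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`},
		{"sudo", "modprobe", "br_netfilter"},
		{"sudo", "sh", "-c", "echo 1 > /proc/sys/net/ipv4/ip_forward"},
		{"sudo", "systemctl", "daemon-reload"},
		{"sudo", "systemctl", "restart", "crio"},
	}
	for _, s := range steps {
		if out, err := exec.Command(s[0], s[1:]...).CombinedOutput(); err != nil {
			fmt.Printf("step %v failed: %v\n%s\n", s, err, out)
			return
		}
	}
}
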
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kubernete
sVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpirati
on:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%s %y" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%s %y" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
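
Before copying the ~458 MB preload tarball, the log first asks crictl whether the expected images are already in the CRI-O store (they are not, hence "assuming images are not preloaded"). A rough sketch of that existence check is below; the JSON shape is the documented crictl output, the image tag is taken from the log, and the helper itself is only illustrative.

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
	"strings"
)

// crictl images --output json returns {"images":[{"repoTags":[...], ...}, ...]}.
type imageList struct {
	Images []struct {
		RepoTags []string `json:"repoTags"`
	} `json:"images"`
}

// hasImage reports whether any image in the CRI store carries the given tag.
func hasImage(tag string) (bool, error) {
	out, err := exec.Command("sudo", "crictl", "images", "--output", "json").Output()
	if err != nil {
		return false, err
	}
	var list imageList
	if err := json.Unmarshal(out, &list); err != nil {
		return false, err
	}
	for _, img := range list.Images {
		for _, t := range img.RepoTags {
			if strings.EqualFold(t, tag) {
				return true, nil
			}
		}
	}
	return false, nil
}

func main() {
	ok, err := hasImage("registry.k8s.io/kube-apiserver:v1.28.4")
	fmt.Println(ok, err) // false triggers the scp + "tar -I lz4" extraction seen above
}
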
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
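
In parallel, the no-preload-731976 VM is still booting, so libmachine keeps polling the libvirt DHCP leases and backs off between attempts (630ms, 1.0s, 1.0s, 1.5s, 2.2s, ...). A generic retry-with-growing-delay loop in that spirit, with a stand-in probe function rather than minikube's real retry package:

package main

import (
	"errors"
	"fmt"
	"time"
)

// retry runs probe until it succeeds or attempts are exhausted, growing the
// delay between tries, similar to the "will retry after ..." lines above.
func retry(attempts int, initial time.Duration, probe func() error) error {
	delay := initial
	var err error
	for i := 0; i < attempts; i++ {
		if err = probe(); err == nil {
			return nil
		}
		fmt.Printf("will retry after %v: %v\n", delay, err)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow roughly 1.5x each round
	}
	return err
}

func main() {
	tries := 0
	err := retry(8, 500*time.Millisecond, func() error {
		tries++
		if tries < 4 {
			return errors.New("waiting for machine to come up")
		}
		return nil // pretend the DHCP lease finally shows an IP
	})
	fmt.Println("done:", err)
}
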
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
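
The block of YAML above (InitConfiguration through KubeProxyConfiguration) is generated from the option struct printed at kubeadm.go:181 and then written to /var/tmp/minikube/kubeadm.yaml.new on the node, as this scp line shows. A toy rendering step with text/template is sketched below; the template is deliberately trimmed down and is illustrative, not minikube's actual template.

package main

import (
	"os"
	"text/template"
)

// Only the handful of fields the toy template needs.
type kubeadmOpts struct {
	AdvertiseAddress string
	APIServerPort    int
	ClusterName      string
	PodSubnet        string
	ServiceCIDR      string
	K8sVersion       string
}

const tmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.AdvertiseAddress}}
  bindPort: {{.APIServerPort}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
clusterName: {{.ClusterName}}
kubernetesVersion: {{.K8sVersion}}
networking:
  podSubnet: "{{.PodSubnet}}"
  serviceSubnet: {{.ServiceCIDR}}
`

func main() {
	opts := kubeadmOpts{
		AdvertiseAddress: "192.168.61.88",
		APIServerPort:    8444,
		ClusterName:      "mk",
		PodSubnet:        "10.244.0.0/16",
		ServiceCIDR:      "10.96.0.0/12",
		K8sVersion:       "v1.28.4",
	}
	// Render to stdout; minikube instead ships the rendered file to the guest VM.
	_ = template.Must(template.New("kubeadm").Parse(tmpl)).Execute(os.Stdout, opts)
}
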
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
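
The openssl x509 -hash / ln -fs pairs above install each CA under /etc/ssl/certs by its subject-hash name (for example b5213941.0 for minikubeCA.pem), which is how OpenSSL locates trust anchors. A small sketch that derives the hash the same way and creates the symlink; the paths are placeholders:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkBySubjectHash computes the OpenSSL subject hash of certPath and creates
// <certsDir>/<hash>.0 pointing at it, mirroring the ln -fs calls in the log.
func linkBySubjectHash(certPath, certsDir string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return err
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join(certsDir, hash+".0")
	_ = os.Remove(link) // -f behaviour: replace an existing link
	return os.Symlink(certPath, link)
}

func main() {
	// Hypothetical paths for illustration only.
	if err := linkBySubjectHash("./minikubeCA.pem", "./ssl-certs"); err != nil {
		fmt.Fprintln(os.Stderr, err)
	}
}
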
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
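
The -checkend 86400 calls verify that none of the control-plane certificates expire within the next 24 hours; a failing check would force regeneration before the restart continues. The same test in pure Go with crypto/x509, using a placeholder certificate path:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"errors"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within d, i.e. the condition under which `openssl x509 -checkend` fails.
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, errors.New("no PEM block found")
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("apiserver.crt", 24*time.Hour) // placeholder path
	fmt.Println(soon, err)
}
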
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
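
Each of the four kubeconfig files is kept only if it already points at https://control-plane.minikube.internal:8444; here none of them exist yet, so every grep exits 2 and the follow-up rm is a no-op. A condensed version of that check, with the file list and endpoint hard-coded for the sketch:

package main

import (
	"fmt"
	"os"
	"strings"
)

func main() {
	endpoint := "https://control-plane.minikube.internal:8444"
	files := []string{
		"/etc/kubernetes/admin.conf",
		"/etc/kubernetes/kubelet.conf",
		"/etc/kubernetes/controller-manager.conf",
		"/etc/kubernetes/scheduler.conf",
	}
	for _, f := range files {
		data, err := os.ReadFile(f)
		if err != nil || !strings.Contains(string(data), endpoint) {
			// Missing or pointing at the wrong endpoint: drop it so kubeadm
			// regenerates it during the kubeconfig phase.
			_ = os.Remove(f)
			fmt.Println("removed stale", f)
			continue
		}
		fmt.Println("keeping", f)
	}
}
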
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
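
api_server.go first waits for a kube-apiserver process to exist (the repeated pgrep calls) and only then probes /healthz; the first probe above fails with connection refused because the listener is not up yet. A small wait loop in the same vein, purely for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls pgrep until a process matching pattern shows up,
// like the repeated ssh_runner calls in the log above.
func waitForProcess(pattern string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // pgrep exits 0 once a matching process exists
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("no process matching %q after %v", pattern, timeout)
}

func main() {
	err := waitForProcess("kube-apiserver.*minikube.*", 90*time.Second)
	fmt.Println(err)
}
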
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
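
The healthz probe walks through the expected restart sequence: connection refused while the static pod starts, 403 for the anonymous user before RBAC is bootstrapped, 500 while the rbac/bootstrap-roles and system-priority-classes post-start hooks finish, and finally 200. A bare-bones poller that tolerates the self-signed serving certificate; the endpoint and timeout are placeholders:

package main

import (
	"crypto/tls"
	"fmt"
	"net/http"
	"time"
)

// waitHealthz polls url until it answers 200 or the timeout expires.
// InsecureSkipVerify is used because the apiserver presents a self-signed
// cert that this toy client has no CA bundle for.
func waitHealthz(url string, timeout time.Duration) error {
	client := &http.Client{
		Timeout:   5 * time.Second,
		Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil
			}
			fmt.Println("healthz not ready, status", resp.StatusCode) // 403/500 early on
		} else {
			fmt.Println("healthz not reachable:", err) // connection refused at first
		}
		time.Sleep(500 * time.Millisecond)
	}
	return fmt.Errorf("apiserver never became healthy at %s", url)
}

func main() {
	fmt.Println(waitHealthz("https://192.168.61.88:8444/healthz", 2*time.Minute))
}
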
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
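
The NodePressure step reads node capacity (17734596Ki of ephemeral storage and 2 CPUs here) to confirm the node is not under disk or CPU pressure before the addon phase runs. A rough equivalent via kubectl's JSON output; the context name is taken from the log, but this is not the client-go code minikube actually uses:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// Just the fields we need from `kubectl get nodes -o json`.
type nodeList struct {
	Items []struct {
		Metadata struct {
			Name string `json:"name"`
		} `json:"metadata"`
		Status struct {
			Capacity map[string]string `json:"capacity"`
		} `json:"status"`
	} `json:"items"`
}

func main() {
	out, err := exec.Command("kubectl", "--context", "default-k8s-diff-port-440341",
		"get", "nodes", "-o", "json").Output()
	if err != nil {
		fmt.Println(err)
		return
	}
	var nodes nodeList
	if err := json.Unmarshal(out, &nodes); err != nil {
		fmt.Println(err)
		return
	}
	for _, n := range nodes.Items {
		fmt.Printf("%s: cpu=%s ephemeral-storage=%s\n",
			n.Metadata.Name, n.Status.Capacity["cpu"], n.Status.Capacity["ephemeral-storage"])
	}
}
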
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
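Before the "Preparing Kubernetes" message, the log shows CRI-O being pointed at the registry.k8s.io/pause:3.9 pause image and the cgroupfs cgroup manager by rewriting /etc/crio/crio.conf.d/02-crio.conf with sed over SSH, followed by a daemon-reload and a crio restart. A minimal sketch of the same edits executed directly on the guest, assuming root access (an illustration of the commands shown above, not minikube's own code path):

package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell command with sudo, standing in for the ssh_runner calls above.
func run(cmd string) error {
	out, err := exec.Command("sudo", "sh", "-c", cmd).CombinedOutput()
	if err != nil {
		return fmt.Errorf("%q failed: %v\n%s", cmd, err, out)
	}
	return nil
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// the same substitutions that appear in the log above
		fmt.Sprintf(`sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' %s`, conf),
		fmt.Sprintf(`sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' %s`, conf),
		fmt.Sprintf(`sed -i '/conmon_cgroup = .*/d' %s`, conf),
		fmt.Sprintf(`sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' %s`, conf),
		"systemctl daemon-reload",
		"systemctl restart crio",
	}
	for _, s := range steps {
		if err := run(s); err != nil {
			panic(err)
		}
	}
}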
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%!s(MISSING) %!y(MISSING)" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
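Editorial note: the run of identical `sudo pgrep -xnf kube-apiserver.*minikube.*` lines above is a roughly half-second polling loop waiting for the apiserver process to appear on the node. The following is only a minimal standalone sketch of that kind of wait, using plain os/exec against a local shell rather than minikube's ssh_runner, with the pattern and timeouts chosen for illustration:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

// waitForProcess polls `sudo pgrep -xnf pattern` until it exits 0 (a match was
// found) or the timeout elapses. pgrep exits non-zero when nothing matches.
func waitForProcess(pattern string, interval, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if err := exec.Command("sudo", "pgrep", "-xnf", pattern).Run(); err == nil {
			return nil // at least one matching process exists
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("no process matching %q appeared within %s", pattern, timeout)
}

func main() {
	if err := waitForProcess("kube-apiserver.*minikube.*", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
		return
	}
	fmt.Println("kube-apiserver process is up")
}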
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
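Editorial note: the 2166 bytes just copied to /var/tmp/minikube/kubeadm.yaml.new are the multi-document kubeadm/kubelet/kube-proxy configuration dumped a few lines above. As a sanity aid only, here is a minimal sketch, assuming the gopkg.in/yaml.v3 module and a local copy of the file, of checking that such a dump parses as a sequence of YAML documents; it is not part of minikube's code path:

package main

import (
	"errors"
	"fmt"
	"io"
	"os"

	"gopkg.in/yaml.v3" // assumption: this third-party module is available
)

func main() {
	f, err := os.Open("kubeadm.yaml.new") // hypothetical local copy of the generated config
	if err != nil {
		fmt.Println(err)
		return
	}
	defer f.Close()

	dec := yaml.NewDecoder(f)
	for i := 1; ; i++ {
		var doc map[string]interface{}
		if err := dec.Decode(&doc); err != nil {
			if errors.Is(err, io.EOF) {
				break // all documents consumed without error
			}
			fmt.Printf("document %d failed to parse: %v\n", i, err)
			return
		}
		fmt.Printf("document %d: kind=%v apiVersion=%v\n", i, doc["kind"], doc["apiVersion"])
	}
}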
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
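Editorial note: the `openssl x509 -noout -in ... -checkend 86400` runs above check that each control-plane certificate remains valid for at least another 24 hours before the restart proceeds. A standard-library Go sketch of an equivalent check follows; the path is taken from the log for illustration and the function is not minikube's:

package main

import (
	"crypto/x509"
	"encoding/pem"
	"fmt"
	"os"
	"time"
)

// expiresWithin reports whether the first certificate in the PEM file expires
// within the given duration (the rough equivalent of `openssl x509 -checkend`).
func expiresWithin(path string, d time.Duration) (bool, error) {
	data, err := os.ReadFile(path)
	if err != nil {
		return false, err
	}
	block, _ := pem.Decode(data)
	if block == nil {
		return false, fmt.Errorf("%s: no PEM block found", path)
	}
	cert, err := x509.ParseCertificate(block.Bytes)
	if err != nil {
		return false, err
	}
	return time.Now().Add(d).After(cert.NotAfter), nil
}

func main() {
	soon, err := expiresWithin("/var/lib/minikube/certs/apiserver-kubelet-client.crt", 24*time.Hour)
	if err != nil {
		fmt.Println(err)
		return
	}
	if soon {
		fmt.Println("certificate expires within 24h; it would need regenerating")
	} else {
		fmt.Println("certificate is valid for at least another 24h")
	}
}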
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
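Editorial note: the healthz probes above show the usual restart progression, connection refused while the apiserver binds, then 403 for the anonymous probe, then 500 while post-start hooks such as rbac/bootstrap-roles and bootstrap-controller finish, and finally 200. The following is only a minimal sketch of polling such an endpoint from Go; it skips TLS verification for a quick standalone probe (an assumption made here, not a statement about how minikube's api_server.go authenticates), and the URL and timeouts are illustrative:

package main

import (
	"crypto/tls"
	"fmt"
	"io"
	"net/http"
	"time"
)

func waitForHealthz(url string, interval, timeout time.Duration) error {
	client := &http.Client{
		Timeout: 5 * time.Second,
		Transport: &http.Transport{
			// Assumption: the serving cert is not trusted by this host, so the
			// probe skips verification instead of configuring the cluster CA.
			TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
		},
	}
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		resp, err := client.Get(url)
		if err == nil {
			body, _ := io.ReadAll(resp.Body)
			resp.Body.Close()
			if resp.StatusCode == http.StatusOK {
				return nil // apiserver reports healthy
			}
			fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, string(body))
		}
		time.Sleep(interval)
	}
	return fmt.Errorf("apiserver not healthy after %s", timeout)
}

func main() {
	if err := waitForHealthz("https://192.168.39.148:8443/healthz", 500*time.Millisecond, 2*time.Minute); err != nil {
		fmt.Println(err)
	}
}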
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
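Editorial note: the waits above poll each system-critical pod and short-circuit ("skipping!") whenever the hosting node itself is not yet Ready, which is why every pod falls through quickly here. Below is a hedged client-go sketch of the basic building block, waiting for a single pod's Ready condition; it is a generic example, not a reproduction of minikube's pod_ready.go, and the kubeconfig path is the one that appears later in this log:

package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/util/wait"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/minikube-integration/18384-942544/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}

	// Poll every 2s until the pod reports Ready=True or the 4m timeout expires.
	err = wait.PollUntilContextTimeout(context.Background(), 2*time.Second, 4*time.Minute, true,
		func(ctx context.Context) (bool, error) {
			pod, err := client.CoreV1().Pods("kube-system").Get(ctx, "coredns-76f75df574-mcddh", metav1.GetOptions{})
			if err != nil {
				return false, nil // keep polling through transient errors
			}
			for _, c := range pod.Status.Conditions {
				if c.Type == corev1.PodReady {
					return c.Status == corev1.ConditionTrue, nil
				}
			}
			return false, nil
		})
	if err != nil {
		fmt.Println("pod never became Ready:", err)
		return
	}
	fmt.Println("pod is Ready")
}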
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
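	(Illustrative aside, not part of the minikube output: the block above is one full probe cycle. As a rough sketch only, the same checks could be replayed on the node with a short shell loop; the component names, crictl invocation, and journalctl units are copied verbatim from the log, and everything else here is an assumption.)

	#!/usr/bin/env bash
	# Sketch of the probe cycle visible in the log above (assumptions noted inline).
	components="kube-apiserver etcd coredns kube-scheduler kube-proxy kube-controller-manager kindnet kubernetes-dashboard"
	for name in $components; do
	  ids=$(sudo crictl ps -a --quiet --name="$name")
	  # Empty output mirrors the "No container was found matching ..." lines above.
	  [ -z "$ids" ] && echo "no container found matching \"$name\""
	done
	# With no control-plane containers running, the apiserver on localhost:8443 is
	# unreachable, which is why the "describe nodes" step above is refused; the
	# remaining evidence is gathered from the kubelet and CRI-O journals.
	sudo journalctl -u kubelet -n 400
	sudo journalctl -u crio -n 400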
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
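The readiness sequence logged above ends with a plain HTTP probe: after pgrep confirms a kube-apiserver process, minikube calls the /healthz endpoint and treats the cluster as healthy once it returns 200 with the body "ok". The following is a minimal Go sketch of that probe, not the test harness's own code; the 5-second timeout and the skipped TLS verification are assumptions, while the host and port are the embed-certs values shown in the log.

    package main

    import (
        "crypto/tls"
        "fmt"
        "io"
        "net/http"
        "time"
    )

    // checkAPIServerHealthz mirrors the healthz probe seen in the log:
    // healthy means HTTP 200 with a body of exactly "ok".
    func checkAPIServerHealthz(host string, port int) error {
        client := &http.Client{
            Timeout: 5 * time.Second, // assumed timeout, not taken from the log
            Transport: &http.Transport{
                // the guest apiserver uses a self-signed CA, so verification is skipped in this sketch
                TLSClientConfig: &tls.Config{InsecureSkipVerify: true},
            },
        }
        url := fmt.Sprintf("https://%s:%d/healthz", host, port)
        resp, err := client.Get(url)
        if err != nil {
            return err
        }
        defer resp.Body.Close()
        body, _ := io.ReadAll(resp.Body)
        if resp.StatusCode != http.StatusOK || string(body) != "ok" {
            return fmt.Errorf("%s returned %d: %s", url, resp.StatusCode, body)
        }
        return nil
    }

    func main() {
        // 192.168.50.213:8443 is the embed-certs endpoint shown in the log above.
        if err := checkAPIServerHealthz("192.168.50.213", 8443); err != nil {
            fmt.Println("apiserver not healthy yet:", err)
            return
        }
        fmt.Println("ok")
    }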
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
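The block above is the stale-kubeconfig cleanup: for each file under /etc/kubernetes, minikube greps for the expected control-plane URL and, whenever the grep does not succeed (here because the files are simply missing), removes the file so kubeadm can regenerate it. A rough local equivalent of that check-and-remove logic follows; the URL and file list are copied from the log, everything else (running it outside the minikube guest, direct file reads instead of grep over SSH) is an assumption of the sketch.

    package main

    import (
        "fmt"
        "os"
        "strings"
    )

    func main() {
        const apiURL = "https://control-plane.minikube.internal:8444"
        files := []string{
            "/etc/kubernetes/admin.conf",
            "/etc/kubernetes/kubelet.conf",
            "/etc/kubernetes/controller-manager.conf",
            "/etc/kubernetes/scheduler.conf",
        }
        for _, f := range files {
            data, err := os.ReadFile(f)
            if err != nil || !strings.Contains(string(data), apiURL) {
                // missing file or wrong server URL: drop it and let kubeadm rewrite it
                fmt.Printf("%q may not be in %s - will remove\n", apiURL, f)
                _ = os.Remove(f)
                continue
            }
            fmt.Println("keeping", f)
        }
    }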
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
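Each "Gathering logs for ..." step above follows the same two-command pattern: discover container IDs with crictl ps -a --quiet --name=<component>, then fetch the last 400 lines of each hit with crictl logs --tail 400 <id>. A condensed sketch of that loop is shown below; the component names and crictl flags are taken directly from the log, while running it locally with sudo and crictl on the PATH is an assumption.

    package main

    import (
        "fmt"
        "os/exec"
        "strings"
    )

    func main() {
        components := []string{"kube-apiserver", "etcd", "coredns", "kube-scheduler",
            "kube-proxy", "kube-controller-manager", "kindnet", "storage-provisioner"}
        for _, name := range components {
            // same discovery command as in the log
            out, err := exec.Command("sudo", "crictl", "ps", "-a", "--quiet", "--name="+name).Output()
            if err != nil {
                fmt.Println("crictl ps failed for", name, ":", err)
                continue
            }
            ids := strings.Fields(string(out))
            if len(ids) == 0 {
                fmt.Println("No container was found matching", name)
                continue
            }
            for _, id := range ids {
                logs, _ := exec.Command("sudo", "crictl", "logs", "--tail", "400", id).CombinedOutput()
                fmt.Printf("=== %s [%s] ===\n%s\n", name, id, logs)
            }
        }
    }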
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
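For reference, the readiness checks recorded above (node "Ready", kube-system pods, the apiserver /healthz probe on 192.168.61.88:8444) can be repeated by hand against the same cluster; a minimal sketch, assuming the kubeconfig context carries the profile name "default-k8s-diff-port-440341" as minikube reports configuring it in the line above:

    # Probe the same health endpoint the test polls (port 8444, as logged)
    kubectl --context default-k8s-diff-port-440341 get --raw /healthz
    # List the kube-system pods whose Ready condition the test waits on
    kubectl --context default-k8s-diff-port-440341 -n kube-system get pods -o wide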
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
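The four grep/rm pairs above apply the same check-then-remove rule to each stale kubeconfig; a minimal sketch of that rule as a by-hand loop (the paths and expected endpoint are taken from the log, the loop itself is only illustrative, not minikube's code):

    # Illustrative only: keep a kubeconfig only if it already points at the expected endpoint
    for f in admin kubelet controller-manager scheduler; do
      cfg="/etc/kubernetes/${f}.conf"
      sudo grep -q "https://control-plane.minikube.internal:8443" "$cfg" || sudo rm -f "$cfg"
    done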
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.891713024Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445325891680464,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1b3dc89c-f184-478f-8343-fb2789a6ca3f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.892493719Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=293831ef-ff96-4480-a949-757859ec8380 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.892543377Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=293831ef-ff96-4480-a949-757859ec8380 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.892581591Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=293831ef-ff96-4480-a949-757859ec8380 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.928718538Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=4c50a1dd-abf4-4462-b023-54846d3359d2 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.928847436Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=4c50a1dd-abf4-4462-b023-54846d3359d2 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.930871036Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=2d16b785-d6ff-4a7e-910e-69aae4611fd6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.931421006Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445325931392258,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=2d16b785-d6ff-4a7e-910e-69aae4611fd6 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.931948029Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=62264a21-3eee-4d88-a213-256e8a7e3c90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.932045130Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=62264a21-3eee-4d88-a213-256e8a7e3c90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.932172873Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=62264a21-3eee-4d88-a213-256e8a7e3c90 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.967839493Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=73f512ce-0456-4e4e-8e6c-b21171af3067 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.967934953Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=73f512ce-0456-4e4e-8e6c-b21171af3067 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.969207055Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=5257f72b-7d53-4000-ae1e-d14915959d65 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.969630739Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445325969582959,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=5257f72b-7d53-4000-ae1e-d14915959d65 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.970308662Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=ea3c555b-5f23-4459-9456-16de2fd3141f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.970389210Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=ea3c555b-5f23-4459-9456-16de2fd3141f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:05 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:05.970434364Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=ea3c555b-5f23-4459-9456-16de2fd3141f name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.008369164Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=419a0865-7bf2-483c-bd82-48da60340288 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.008501999Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=419a0865-7bf2-483c-bd82-48da60340288 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.009818941Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=914d87bf-910d-4aeb-b4b6-17e71275c97b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.010509157Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445326010472266,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=914d87bf-910d-4aeb-b4b6-17e71275c97b name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.011615943Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1ece6fa8-44e0-4368-a542-30a990f5ccfd name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.011661658Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1ece6fa8-44e0-4368-a542-30a990f5ccfd name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:42:06 old-k8s-version-968094 crio[643]: time="2024-03-14 19:42:06.011695207Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1ece6fa8-44e0-4368-a542-30a990f5ccfd name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.383233] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.688071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.628591] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.061195] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075909] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.206378] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.171886] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.322045] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.118781] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.064097] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 19:25] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +8.646452] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 19:29] systemd-fstab-generator[4997]: Ignoring "noauto" option for root device
	[Mar14 19:31] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.079625] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:42:06 up 17 min,  0 users,  load average: 0.02, 0.08, 0.07
	Linux old-k8s-version-968094 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: net/http.(*Transport).dialConnFor(0xc0004e5400, 0xc000436840)
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /usr/local/go/src/net/http/transport.go:1421 +0xc6
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: created by net/http.(*Transport).queueForDial
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /usr/local/go/src/net/http/transport.go:1390 +0x40f
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: goroutine 125 [syscall]:
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: syscall.Syscall6(0xe8, 0xb, 0xc000c17b6c, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0, 0x0, 0x0)
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /usr/local/go/src/syscall/asm_linux_amd64.s:41 +0x5
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: k8s.io/kubernetes/vendor/golang.org/x/sys/unix.EpollWait(0xb, 0xc000c17b6c, 0x7, 0x7, 0xffffffffffffffff, 0x0, 0x0, 0x0)
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/golang.org/x/sys/unix/zsyscall_linux_amd64.go:76 +0x72
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*fdPoller).wait(0xc00090b300, 0x0, 0x0, 0x0)
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify_poller.go:86 +0x91
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.(*Watcher).readEvents(0xc0008f1db0)
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:192 +0x206
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]: created by k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify.NewWatcher
	Mar 14 19:42:03 old-k8s-version-968094 kubelet[6446]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/fsnotify/fsnotify/inotify.go:59 +0x1a8
	Mar 14 19:42:03 old-k8s-version-968094 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 14 19:42:03 old-k8s-version-968094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 14 19:42:04 old-k8s-version-968094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 114.
	Mar 14 19:42:04 old-k8s-version-968094 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 19:42:04 old-k8s-version-968094 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 19:42:04 old-k8s-version-968094 kubelet[6455]: I0314 19:42:04.241231    6455 server.go:416] Version: v1.20.0
	Mar 14 19:42:04 old-k8s-version-968094 kubelet[6455]: I0314 19:42:04.241482    6455 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 19:42:04 old-k8s-version-968094 kubelet[6455]: I0314 19:42:04.244321    6455 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 19:42:04 old-k8s-version-968094 kubelet[6455]: W0314 19:42:04.245383    6455 manager.go:159] Cannot detect current cgroup on cgroup v2
	Mar 14 19:42:04 old-k8s-version-968094 kubelet[6455]: I0314 19:42:04.246188    6455 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (273.609128ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-968094" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/UserAppExistsAfterStop (543.31s)
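The failure above is the kubelet crash-looping before the control plane ever comes up (the kubelet log shows systemd at restart counter 114, and crictl lists no kube containers). The checks and the remediation are already named in the quoted output; the lines below are only a sketch assembled from those same quoted commands, assuming the profile name old-k8s-version-968094 and the KVM/crio setup used in this run, and they are not a verified fix for this report:

	# inside the guest (for a still-existing profile): minikube ssh -p old-k8s-version-968094
	systemctl status kubelet
	journalctl -xeu kubelet
	crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause

	# from the host, retry the start with the kubelet cgroup driver pinned to systemd,
	# which is the suggestion minikube prints for K8S_KUBELET_NOT_RUNNING
	minikube start -p old-k8s-version-968094 --extra-config=kubelet.cgroup-driver=systemd

The --extra-config=kubelet.cgroup-driver=systemd flag comes from minikube's own suggestion (see the related issue link in the log); whether it resolves this particular crash loop is not confirmed by the report.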

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (396.45s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992669 -n embed-certs-992669
start_stop_delete_test.go:287: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:45:34.0888022 +0000 UTC m=+6060.574661078
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context embed-certs-992669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context embed-certs-992669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.704µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context embed-certs-992669 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:244: <<< TestStartStop/group/embed-certs/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p embed-certs-992669 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p embed-certs-992669 logs -n 25: (1.425850817s)
helpers_test.go:252: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:43 UTC | 14 Mar 24 19:43 UTC |
	| start   | -p newest-cni-549136 --memory=2200 --alsologtostderr   | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:43 UTC | 14 Mar 24 19:44 UTC |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:44 UTC | 14 Mar 24 19:44 UTC |
	| start   | -p auto-058224 --memory=3072                           | auto-058224                  | jenkins | v1.32.0 | 14 Mar 24 19:44 UTC |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --wait-timeout=15m                                     |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p newest-cni-549136             | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:44 UTC | 14 Mar 24 19:45 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p newest-cni-549136                                   | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:45 UTC | 14 Mar 24 19:45 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p newest-cni-549136                  | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:45 UTC | 14 Mar 24 19:45 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p newest-cni-549136 --memory=2200 --alsologtostderr   | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:45 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:45:11
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:45:11.428423  998093 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:45:11.428712  998093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:45:11.428723  998093 out.go:304] Setting ErrFile to fd 2...
	I0314 19:45:11.428728  998093 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:45:11.428978  998093 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:45:11.429621  998093 out.go:298] Setting JSON to false
	I0314 19:45:11.430595  998093 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":98863,"bootTime":1710346648,"procs":206,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:45:11.430662  998093 start.go:139] virtualization: kvm guest
	I0314 19:45:11.432991  998093 out.go:177] * [newest-cni-549136] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:45:11.434468  998093 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:45:11.435904  998093 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:45:11.434507  998093 notify.go:220] Checking for updates...
	I0314 19:45:11.438380  998093 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:45:11.439738  998093 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:45:11.441035  998093 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:45:11.442210  998093 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:45:11.443753  998093 config.go:182] Loaded profile config "newest-cni-549136": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:45:11.444201  998093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:11.444274  998093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:11.459521  998093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40819
	I0314 19:45:11.460054  998093 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:11.460658  998093 main.go:141] libmachine: Using API Version  1
	I0314 19:45:11.460681  998093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:11.461101  998093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:11.461326  998093 main.go:141] libmachine: (newest-cni-549136) Calling .DriverName
	I0314 19:45:11.461624  998093 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:45:11.461908  998093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:11.461962  998093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:11.476591  998093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43013
	I0314 19:45:11.477041  998093 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:11.477597  998093 main.go:141] libmachine: Using API Version  1
	I0314 19:45:11.477627  998093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:11.477963  998093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:11.478180  998093 main.go:141] libmachine: (newest-cni-549136) Calling .DriverName
	I0314 19:45:11.513138  998093 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:45:11.514430  998093 start.go:297] selected driver: kvm2
	I0314 19:45:11.514445  998093 start.go:901] validating driver "kvm2" against &{Name:newest-cni-549136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.29.0-rc.2 ClusterName:newest-cni-549136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pod
s:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:45:11.514590  998093 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:45:11.515534  998093 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:45:11.515620  998093 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:45:11.530517  998093 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:45:11.530948  998093 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 19:45:11.530992  998093 cni.go:84] Creating CNI manager for ""
	I0314 19:45:11.531007  998093 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:45:11.531054  998093 start.go:340] cluster config:
	{Name:newest-cni-549136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-549136 Namespace:default APIS
erverHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.244 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddres
s: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:45:11.531236  998093 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:45:11.533047  998093 out.go:177] * Starting "newest-cni-549136" primary control-plane node in "newest-cni-549136" cluster
	I0314 19:45:11.534348  998093 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:45:11.534388  998093 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 19:45:11.534418  998093 cache.go:56] Caching tarball of preloaded images
	I0314 19:45:11.534507  998093 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:45:11.534518  998093 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 19:45:11.534615  998093 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/newest-cni-549136/config.json ...
	I0314 19:45:11.534793  998093 start.go:360] acquireMachinesLock for newest-cni-549136: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:45:11.534831  998093 start.go:364] duration metric: took 21.384µs to acquireMachinesLock for "newest-cni-549136"
	I0314 19:45:11.534845  998093 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:45:11.534852  998093 fix.go:54] fixHost starting: 
	I0314 19:45:11.535097  998093 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:11.535127  998093 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:11.550552  998093 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38503
	I0314 19:45:11.551051  998093 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:11.553238  998093 main.go:141] libmachine: Using API Version  1
	I0314 19:45:11.553271  998093 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:11.553687  998093 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:11.554207  998093 main.go:141] libmachine: (newest-cni-549136) Calling .DriverName
	I0314 19:45:11.554392  998093 main.go:141] libmachine: (newest-cni-549136) Calling .GetState
	I0314 19:45:11.556098  998093 fix.go:112] recreateIfNeeded on newest-cni-549136: state=Stopped err=<nil>
	I0314 19:45:11.556168  998093 main.go:141] libmachine: (newest-cni-549136) Calling .DriverName
	W0314 19:45:11.556355  998093 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:45:11.558418  998093 out.go:177] * Restarting existing kvm2 VM for "newest-cni-549136" ...
	I0314 19:45:11.619810  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:12.119802  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:12.619925  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:13.120614  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:13.619959  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:14.120597  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:14.620797  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:15.120656  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:15.619789  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:16.120601  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:11.559744  998093 main.go:141] libmachine: (newest-cni-549136) Calling .Start
	I0314 19:45:11.559931  998093 main.go:141] libmachine: (newest-cni-549136) Ensuring networks are active...
	I0314 19:45:11.560880  998093 main.go:141] libmachine: (newest-cni-549136) Ensuring network default is active
	I0314 19:45:11.561197  998093 main.go:141] libmachine: (newest-cni-549136) Ensuring network mk-newest-cni-549136 is active
	I0314 19:45:11.561656  998093 main.go:141] libmachine: (newest-cni-549136) Getting domain xml...
	I0314 19:45:11.562512  998093 main.go:141] libmachine: (newest-cni-549136) Creating domain...
	I0314 19:45:12.844616  998093 main.go:141] libmachine: (newest-cni-549136) Waiting to get IP...
	I0314 19:45:12.845684  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:12.846222  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:12.846294  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:12.846193  998129 retry.go:31] will retry after 251.307326ms: waiting for machine to come up
	I0314 19:45:13.098637  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:13.099120  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:13.099150  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:13.099077  998129 retry.go:31] will retry after 266.089213ms: waiting for machine to come up
	I0314 19:45:13.366649  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:13.367171  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:13.367204  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:13.367109  998129 retry.go:31] will retry after 477.894601ms: waiting for machine to come up
	I0314 19:45:13.846809  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:13.847418  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:13.847449  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:13.847373  998129 retry.go:31] will retry after 510.236763ms: waiting for machine to come up
	I0314 19:45:14.359158  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:14.359715  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:14.359746  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:14.359664  998129 retry.go:31] will retry after 533.64969ms: waiting for machine to come up
	I0314 19:45:14.895565  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:14.896049  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:14.896079  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:14.896000  998129 retry.go:31] will retry after 630.626474ms: waiting for machine to come up
	I0314 19:45:15.528580  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:15.529069  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:15.529092  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:15.529047  998129 retry.go:31] will retry after 960.224298ms: waiting for machine to come up
	I0314 19:45:16.620407  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:17.119875  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:17.619760  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:18.120674  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:18.620375  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:19.120281  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:19.619793  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:20.120517  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:20.619965  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:21.120665  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:16.491201  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:16.491680  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:16.491713  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:16.491626  998129 retry.go:31] will retry after 1.192419524s: waiting for machine to come up
	I0314 19:45:17.685467  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:17.686071  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:17.686135  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:17.686035  998129 retry.go:31] will retry after 1.837550389s: waiting for machine to come up
	I0314 19:45:19.525463  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:19.526089  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:19.526119  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:19.526020  998129 retry.go:31] will retry after 1.528330551s: waiting for machine to come up
	I0314 19:45:21.056157  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:21.056707  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:21.056746  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:21.056645  998129 retry.go:31] will retry after 1.805461452s: waiting for machine to come up
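The repeated "retry.go:31] will retry after ..." lines above show minikube polling libvirt for the restarted VM's DHCP lease, sleeping a little longer after each failed lookup. A minimal, hypothetical Go sketch of that pattern follows; lookupIP, the starting delay, and the back-off growth factor are assumptions for illustration, not minikube's actual retry.go implementation.

// Sketch of a poll-with-growing-delay loop, as seen in the log above.
package main

import (
	"errors"
	"log"
	"time"
)

// waitForIP calls lookupIP until it succeeds or the timeout expires,
// waiting a little longer after each failed attempt.
func waitForIP(lookupIP func() (string, error), timeout time.Duration) (string, error) {
	deadline := time.Now().Add(timeout)
	delay := 250 * time.Millisecond // rough starting delay, similar to the log above
	for time.Now().Before(deadline) {
		ip, err := lookupIP()
		if err == nil {
			return ip, nil
		}
		log.Printf("will retry after %v: waiting for machine to come up", delay)
		time.Sleep(delay)
		delay = delay * 3 / 2 // grow the delay between attempts (growth factor is an assumption)
	}
	return "", errors.New("timed out waiting for machine to come up")
}

func main() {
	// Fake lookup that fails twice before returning the node IP from this run's config.
	attempts := 0
	fakeLookup := func() (string, error) {
		attempts++
		if attempts < 3 {
			return "", errors.New("no DHCP lease yet")
		}
		return "192.168.72.244", nil
	}
	ip, err := waitForIP(fakeLookup, 30*time.Second)
	if err != nil {
		log.Fatal(err)
	}
	log.Printf("machine came up at %s", ip)
}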
	I0314 19:45:21.620380  997589 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:45:21.821804  997589 kubeadm.go:1106] duration metric: took 13.011045049s to wait for elevateKubeSystemPrivileges
	W0314 19:45:21.821854  997589 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:45:21.821864  997589 kubeadm.go:393] duration metric: took 24.320052541s to StartCluster
	I0314 19:45:21.821888  997589 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:45:21.821974  997589 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:45:21.823926  997589 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:45:21.824174  997589 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.39.96 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:45:21.825933  997589 out.go:177] * Verifying Kubernetes components...
	I0314 19:45:21.824363  997589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 19:45:21.824387  997589 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:45:21.824586  997589 config.go:182] Loaded profile config "auto-058224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:45:21.827307  997589 addons.go:69] Setting storage-provisioner=true in profile "auto-058224"
	I0314 19:45:21.827324  997589 addons.go:69] Setting default-storageclass=true in profile "auto-058224"
	I0314 19:45:21.827387  997589 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "auto-058224"
	I0314 19:45:21.827904  997589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:21.827969  997589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:21.827340  997589 addons.go:234] Setting addon storage-provisioner=true in "auto-058224"
	I0314 19:45:21.828185  997589 host.go:66] Checking if "auto-058224" exists ...
	I0314 19:45:21.827341  997589 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:45:21.828582  997589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:21.828652  997589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:21.849802  997589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44277
	I0314 19:45:21.849802  997589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43713
	I0314 19:45:21.850406  997589 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:21.850548  997589 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:21.851030  997589 main.go:141] libmachine: Using API Version  1
	I0314 19:45:21.851048  997589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:21.851191  997589 main.go:141] libmachine: Using API Version  1
	I0314 19:45:21.851201  997589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:21.851621  997589 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:21.851749  997589 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:21.851986  997589 main.go:141] libmachine: (auto-058224) Calling .GetState
	I0314 19:45:21.852561  997589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:21.852605  997589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:21.858913  997589 addons.go:234] Setting addon default-storageclass=true in "auto-058224"
	I0314 19:45:21.858957  997589 host.go:66] Checking if "auto-058224" exists ...
	I0314 19:45:21.859342  997589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:21.859397  997589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:21.877563  997589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37493
	I0314 19:45:21.877575  997589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45129
	I0314 19:45:21.878152  997589 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:21.878212  997589 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:21.878728  997589 main.go:141] libmachine: Using API Version  1
	I0314 19:45:21.878752  997589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:21.878867  997589 main.go:141] libmachine: Using API Version  1
	I0314 19:45:21.878883  997589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:21.879185  997589 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:21.879208  997589 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:21.879412  997589 main.go:141] libmachine: (auto-058224) Calling .GetState
	I0314 19:45:21.879906  997589 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:45:21.879950  997589 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:45:21.881345  997589 main.go:141] libmachine: (auto-058224) Calling .DriverName
	I0314 19:45:21.883224  997589 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:45:21.884843  997589 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:45:21.884865  997589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:45:21.884883  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHHostname
	I0314 19:45:21.887993  997589 main.go:141] libmachine: (auto-058224) DBG | domain auto-058224 has defined MAC address 52:54:00:bb:0e:33 in network mk-auto-058224
	I0314 19:45:21.889046  997589 main.go:141] libmachine: (auto-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:0e:33", ip: ""} in network mk-auto-058224: {Iface:virbr1 ExpiryTime:2024-03-14 20:44:41 +0000 UTC Type:0 Mac:52:54:00:bb:0e:33 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:auto-058224 Clientid:01:52:54:00:bb:0e:33}
	I0314 19:45:21.889086  997589 main.go:141] libmachine: (auto-058224) DBG | domain auto-058224 has defined IP address 192.168.39.96 and MAC address 52:54:00:bb:0e:33 in network mk-auto-058224
	I0314 19:45:21.889505  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHPort
	I0314 19:45:21.889733  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHKeyPath
	I0314 19:45:21.889888  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHUsername
	I0314 19:45:21.890100  997589 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/auto-058224/id_rsa Username:docker}
	I0314 19:45:21.901729  997589 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38307
	I0314 19:45:21.902257  997589 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:45:21.902754  997589 main.go:141] libmachine: Using API Version  1
	I0314 19:45:21.902773  997589 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:45:21.903135  997589 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:45:21.903280  997589 main.go:141] libmachine: (auto-058224) Calling .GetState
	I0314 19:45:21.904996  997589 main.go:141] libmachine: (auto-058224) Calling .DriverName
	I0314 19:45:21.905213  997589 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:45:21.905227  997589 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:45:21.905240  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHHostname
	I0314 19:45:21.908352  997589 main.go:141] libmachine: (auto-058224) DBG | domain auto-058224 has defined MAC address 52:54:00:bb:0e:33 in network mk-auto-058224
	I0314 19:45:21.908767  997589 main.go:141] libmachine: (auto-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:bb:0e:33", ip: ""} in network mk-auto-058224: {Iface:virbr1 ExpiryTime:2024-03-14 20:44:41 +0000 UTC Type:0 Mac:52:54:00:bb:0e:33 Iaid: IPaddr:192.168.39.96 Prefix:24 Hostname:auto-058224 Clientid:01:52:54:00:bb:0e:33}
	I0314 19:45:21.908793  997589 main.go:141] libmachine: (auto-058224) DBG | domain auto-058224 has defined IP address 192.168.39.96 and MAC address 52:54:00:bb:0e:33 in network mk-auto-058224
	I0314 19:45:21.909046  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHPort
	I0314 19:45:21.909209  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHKeyPath
	I0314 19:45:21.909361  997589 main.go:141] libmachine: (auto-058224) Calling .GetSSHUsername
	I0314 19:45:21.909519  997589 sshutil.go:53] new ssh client: &{IP:192.168.39.96 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/auto-058224/id_rsa Username:docker}
	I0314 19:45:22.212434  997589 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:45:22.212807  997589 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 19:45:22.312100  997589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:45:22.360945  997589 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:45:23.731356  997589 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.39.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (1.518504594s)
	I0314 19:45:23.731404  997589 start.go:948] {"host.minikube.internal": 192.168.39.1} host record injected into CoreDNS's ConfigMap
	I0314 19:45:23.731417  997589 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (1.518940333s)
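The CoreDNS rewrite that completed above (the long sed pipeline) inserts a log directive before errors and a hosts block before the "forward . /etc/resolv.conf" line, so the resulting Corefile ends up looking roughly like the fragment below. This is reconstructed from the command itself, not copied from the cluster, and unrelated plugins are omitted.

.:53 {
    log
    errors
    ...
    hosts {
       192.168.39.1 host.minikube.internal
       fallthrough
    }
    forward . /etc/resolv.conf
    ...
}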
	I0314 19:45:23.733261  997589 node_ready.go:35] waiting up to 15m0s for node "auto-058224" to be "Ready" ...
	I0314 19:45:23.747343  997589 node_ready.go:49] node "auto-058224" has status "Ready":"True"
	I0314 19:45:23.747376  997589 node_ready.go:38] duration metric: took 14.081061ms for node "auto-058224" to be "Ready" ...
	I0314 19:45:23.747403  997589 pod_ready.go:35] extra waiting up to 15m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:45:23.760712  997589 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-ffm29" in "kube-system" namespace to be "Ready" ...
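The pod_ready.go lines here poll system-critical pods until their PodReady condition becomes True or the 15m0s budget runs out. A minimal sketch of that kind of check using client-go is shown below; the fixed 2-second poll interval, the kubeconfig path, and the pod name are illustrative assumptions, not minikube's actual code.

// Sketch: poll a pod until it reports Ready, or time out.
package main

import (
	"context"
	"fmt"
	"log"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's PodReady condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

// waitForPodReady polls the named pod until it is Ready or the timeout expires.
func waitForPodReady(ctx context.Context, cs kubernetes.Interface, ns, name string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		pod, err := cs.CoreV1().Pods(ns).Get(ctx, name, metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			return nil
		}
		time.Sleep(2 * time.Second) // poll interval is an assumption
	}
	return fmt.Errorf("pod %s/%s was not Ready within %v", ns, name, timeout)
}

func main() {
	// Kubeconfig path and pod name are illustrative only.
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		log.Fatal(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		log.Fatal(err)
	}
	if err := waitForPodReady(context.Background(), cs, "kube-system", "coredns-5dd5756b68-mmglq", 15*time.Minute); err != nil {
		log.Fatal(err)
	}
	log.Println("pod is Ready")
}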
	I0314 19:45:23.993000  997589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.680844661s)
	I0314 19:45:23.993076  997589 main.go:141] libmachine: Making call to close driver server
	I0314 19:45:23.993104  997589 main.go:141] libmachine: (auto-058224) Calling .Close
	I0314 19:45:23.993020  997589 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.632034548s)
	I0314 19:45:23.993194  997589 main.go:141] libmachine: Making call to close driver server
	I0314 19:45:23.993212  997589 main.go:141] libmachine: (auto-058224) Calling .Close
	I0314 19:45:23.993632  997589 main.go:141] libmachine: (auto-058224) DBG | Closing plugin on server side
	I0314 19:45:23.993641  997589 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:45:23.993714  997589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:45:23.993724  997589 main.go:141] libmachine: Making call to close driver server
	I0314 19:45:23.993757  997589 main.go:141] libmachine: (auto-058224) Calling .Close
	I0314 19:45:23.993657  997589 main.go:141] libmachine: (auto-058224) DBG | Closing plugin on server side
	I0314 19:45:23.993653  997589 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:45:23.993815  997589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:45:23.993830  997589 main.go:141] libmachine: Making call to close driver server
	I0314 19:45:23.993842  997589 main.go:141] libmachine: (auto-058224) Calling .Close
	I0314 19:45:23.994149  997589 main.go:141] libmachine: (auto-058224) DBG | Closing plugin on server side
	I0314 19:45:23.994195  997589 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:45:23.994212  997589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:45:23.995665  997589 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:45:23.995683  997589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:45:24.023308  997589 main.go:141] libmachine: Making call to close driver server
	I0314 19:45:24.023331  997589 main.go:141] libmachine: (auto-058224) Calling .Close
	I0314 19:45:24.023719  997589 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:45:24.023745  997589 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:45:24.025534  997589 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
	I0314 19:45:24.026872  997589 addons.go:505] duration metric: took 2.20249557s for enable addons: enabled=[storage-provisioner default-storageclass]
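Here storage-provisioner and default-storageclass were enabled as part of minikube start; on an already-running profile the same addons can also be toggled with the addons subcommand, for example (profile name taken from this run):

minikube addons enable storage-provisioner -p auto-058224
minikube addons enable default-storageclass -p auto-058224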
	I0314 19:45:24.238680  997589 kapi.go:248] "coredns" deployment in "kube-system" namespace and "auto-058224" context rescaled to 1 replicas
	I0314 19:45:25.264946  997589 pod_ready.go:97] error getting pod "coredns-5dd5756b68-ffm29" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ffm29" not found
	I0314 19:45:25.264982  997589 pod_ready.go:81] duration metric: took 1.504240126s for pod "coredns-5dd5756b68-ffm29" in "kube-system" namespace to be "Ready" ...
	E0314 19:45:25.264997  997589 pod_ready.go:66] WaitExtra: waitPodCondition: error getting pod "coredns-5dd5756b68-ffm29" in "kube-system" namespace (skipping!): pods "coredns-5dd5756b68-ffm29" not found
	I0314 19:45:25.265005  997589 pod_ready.go:78] waiting up to 15m0s for pod "coredns-5dd5756b68-mmglq" in "kube-system" namespace to be "Ready" ...
	I0314 19:45:22.864740  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:22.865532  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:22.865568  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:22.865459  998129 retry.go:31] will retry after 2.386643418s: waiting for machine to come up
	I0314 19:45:25.253355  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:25.253769  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:25.253808  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:25.253748  998129 retry.go:31] will retry after 2.765713901s: waiting for machine to come up
	I0314 19:45:27.272886  997589 pod_ready.go:102] pod "coredns-5dd5756b68-mmglq" in "kube-system" namespace has status "Ready":"False"
	I0314 19:45:29.773393  997589 pod_ready.go:102] pod "coredns-5dd5756b68-mmglq" in "kube-system" namespace has status "Ready":"False"
	I0314 19:45:28.021508  998093 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:45:28.022036  998093 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:45:28.022069  998093 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:45:28.021980  998129 retry.go:31] will retry after 5.647350202s: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.865923302Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=96cd0759-850e-430b-b916-2068d8576d63 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.915545760Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:nil,}" file="otel-collector/interceptors.go:62" id=735ecb4f-c5d7-4b78-92fb-dfd3b80a51c0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.915810662Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3f65c725-e834-45db-a417-fd47b421c883,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444594281240880,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T19:29:53.956684643Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c94cd013726278cb7285256c5fd2651988d32525e5554402e14780c84e379496,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-kr2n6,Uid:8ef90636-238c-4334-861a-e40c758d012b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444594156595433,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-kr2n6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef90636-238c-4334-861a-e40c758d012
b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:53.843040820Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ngbmj,Uid:a85a72f9-bb81-4f35-97ec-585c80194c1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591420870296,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85a72f9-bb81-4f35-97ec-585c80194c1c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:51.110173291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-tn7lt,Uid:bf62479b-d5f9-4020
-950d-8f3d71e952fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591395037985,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:51.083959480Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&PodSandboxMetadata{Name:kube-proxy-hzhsp,Uid:cac20e54-9d37-4f3b-a71a-e92c03f806d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591277624957,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:50.969883241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-992669,Uid:4f03b52fd3fe2a7fdbaa67f2d69d71ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571855594216,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,kubernetes.io/config.seen: 2024-03-14T19:29:31.377545685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&PodSandboxM
etadata{Name:kube-scheduler-embed-certs-992669,Uid:fa4a93aa8b6b303c7fe944e4adfcf7f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571850399850,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa4a93aa8b6b303c7fe944e4adfcf7f1,kubernetes.io/config.seen: 2024-03-14T19:29:31.377546807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-992669,Uid:96dbdba166be5bd3f2714cba5cf85166,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571848642851,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8443,kubernetes.io/config.hash: 96dbdba166be5bd3f2714cba5cf85166,kubernetes.io/config.seen: 2024-03-14T19:29:31.377544256Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-992669,Uid:7d6e53c0267e65fbc4272161062157c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571847880027,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.213:2379,kubernetes.io/config.hash: 7d6e53c0267e65fbc4272161062157c5,kubernetes.io/config.seen: 2024-03-14T19:29:31.377539457Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=735ecb4f-c5d7-4b78-92fb-dfd3b80a51c0 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.916640090Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=21049456-7a27-427b-bf95-0ca449b9a5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.916725221Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=21049456-7a27-427b-bf95-0ca449b9a5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.917358590Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=21049456-7a27-427b-bf95-0ca449b9a5f4 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.928135468Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=05bae705-5595-4f2e-a85f-e00345d96a58 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.928253560Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=05bae705-5595-4f2e-a85f-e00345d96a58 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.929700076Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=ae2eac05-0475-4abe-a9c2-801e238c2298 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.930593416Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445534930564636,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=ae2eac05-0475-4abe-a9c2-801e238c2298 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.931402513Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=04b1c64d-ecd8-4a92-8966-43ec4219f334 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.931474961Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=04b1c64d-ecd8-4a92-8966-43ec4219f334 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.931652510Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=04b1c64d-ecd8-4a92-8966-43ec4219f334 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.965504776Z" level=debug msg="Request: &ListPodSandboxRequest{Filter:&PodSandboxFilter{Id:,State:&PodSandboxStateValue{State:SANDBOX_READY,},LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=b6c0c0c4-c106-4440-a199-0e4d830ae964 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.965761900Z" level=debug msg="Response: &ListPodSandboxResponse{Items:[]*PodSandbox{&PodSandbox{Id:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&PodSandboxMetadata{Name:storage-provisioner,Uid:3f65c725-e834-45db-a417-fd47b421c883,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444594281240880,Labels:map[string]string{addonmanager.kubernetes.io/mode: Reconcile,integration-test: storage-provisioner,io.kubernetes.container.name: POD,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{kubectl.kubernetes.io/last-applied-configuration: {\"apiVersion\":\"v1\",\"kind\":\"Pod\",\"metadata\":{\"annotations\":{},\"labels\":{\"addonmanager.kubernetes.io/mode\":\"Reconcile\",\"integration-test\":\"storage-provisioner\"},\"name\":\"storage-provisioner\",\"namespace\":\"kube
-system\"},\"spec\":{\"containers\":[{\"command\":[\"/storage-provisioner\"],\"image\":\"gcr.io/k8s-minikube/storage-provisioner:v5\",\"imagePullPolicy\":\"IfNotPresent\",\"name\":\"storage-provisioner\",\"volumeMounts\":[{\"mountPath\":\"/tmp\",\"name\":\"tmp\"}]}],\"hostNetwork\":true,\"serviceAccountName\":\"storage-provisioner\",\"volumes\":[{\"hostPath\":{\"path\":\"/tmp\",\"type\":\"Directory\"},\"name\":\"tmp\"}]}}\n,kubernetes.io/config.seen: 2024-03-14T19:29:53.956684643Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:c94cd013726278cb7285256c5fd2651988d32525e5554402e14780c84e379496,Metadata:&PodSandboxMetadata{Name:metrics-server-57f55c9bc5-kr2n6,Uid:8ef90636-238c-4334-861a-e40c758d012b,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444594156595433,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: metrics-server-57f55c9bc5-kr2n6,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 8ef90636-238c-4334-861a-e40c758d012
b,k8s-app: metrics-server,pod-template-hash: 57f55c9bc5,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:53.843040820Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-ngbmj,Uid:a85a72f9-bb81-4f35-97ec-585c80194c1c,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591420870296,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a85a72f9-bb81-4f35-97ec-585c80194c1c,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:51.110173291Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&PodSandboxMetadata{Name:coredns-5dd5756b68-tn7lt,Uid:bf62479b-d5f9-4020
-950d-8f3d71e952fa,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591395037985,Labels:map[string]string{io.kubernetes.container.name: POD,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,k8s-app: kube-dns,pod-template-hash: 5dd5756b68,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:51.083959480Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&PodSandboxMetadata{Name:kube-proxy-hzhsp,Uid:cac20e54-9d37-4f3b-a71a-e92c03f806d8,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444591277624957,Labels:map[string]string{controller-revision-hash: 8486c7d9cd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,k8s-app: kube-proxy,pod-tem
plate-generation: 1,},Annotations:map[string]string{kubernetes.io/config.seen: 2024-03-14T19:29:50.969883241Z,kubernetes.io/config.source: api,},RuntimeHandler:,},&PodSandbox{Id:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&PodSandboxMetadata{Name:kube-controller-manager-embed-certs-992669,Uid:4f03b52fd3fe2a7fdbaa67f2d69d71ad,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571855594216,Labels:map[string]string{component: kube-controller-manager,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,kubernetes.io/config.seen: 2024-03-14T19:29:31.377545685Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&PodSandboxM
etadata{Name:kube-scheduler-embed-certs-992669,Uid:fa4a93aa8b6b303c7fe944e4adfcf7f1,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571850399850,Labels:map[string]string{component: kube-scheduler,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,tier: control-plane,},Annotations:map[string]string{kubernetes.io/config.hash: fa4a93aa8b6b303c7fe944e4adfcf7f1,kubernetes.io/config.seen: 2024-03-14T19:29:31.377546807Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&PodSandboxMetadata{Name:kube-apiserver-embed-certs-992669,Uid:96dbdba166be5bd3f2714cba5cf85166,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571848642851,Labels:map[string]string{component: kube-apiserver,io.kubernetes.container.name: POD,io.kubernetes.pod.name: kube-apiserver
-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/kube-apiserver.advertise-address.endpoint: 192.168.50.213:8443,kubernetes.io/config.hash: 96dbdba166be5bd3f2714cba5cf85166,kubernetes.io/config.seen: 2024-03-14T19:29:31.377544256Z,kubernetes.io/config.source: file,},RuntimeHandler:,},&PodSandbox{Id:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&PodSandboxMetadata{Name:etcd-embed-certs-992669,Uid:7d6e53c0267e65fbc4272161062157c5,Namespace:kube-system,Attempt:0,},State:SANDBOX_READY,CreatedAt:1710444571847880027,Labels:map[string]string{component: etcd,io.kubernetes.container.name: POD,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,tier: control-plane,},Annotations:map[string]string{kubeadm.kubernetes.io/etcd.advertise-client-urls: https://192.168.5
0.213:2379,kubernetes.io/config.hash: 7d6e53c0267e65fbc4272161062157c5,kubernetes.io/config.seen: 2024-03-14T19:29:31.377539457Z,kubernetes.io/config.source: file,},RuntimeHandler:,},},}" file="otel-collector/interceptors.go:74" id=b6c0c0c4-c106-4440-a199-0e4d830ae964 name=/runtime.v1.RuntimeService/ListPodSandbox
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.967173815Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:&ContainerStateValue{State:CONTAINER_RUNNING,},PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=50fb7ced-ddb1-4765-8c52-4c3af567752d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.967554207Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=50fb7ced-ddb1-4765-8c52-4c3af567752d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.967753822Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=50fb7ced-ddb1-4765-8c52-4c3af567752d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.972599167Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=d0066375-1749-40a1-9822-9164b2f280fb name=/runtime.v1.RuntimeService/Version
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.972662208Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=d0066375-1749-40a1-9822-9164b2f280fb name=/runtime.v1.RuntimeService/Version
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.975422751Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=9fbe95fd-e384-41fd-ac75-98bc8f044d01 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.975922835Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445534975893089,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=9fbe95fd-e384-41fd-ac75-98bc8f044d01 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.976638521Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0c7b81e0-ddf7-4e24-9d9a-60160a50b24b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.976714464Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0c7b81e0-ddf7-4e24-9d9a-60160a50b24b name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:45:34 embed-certs-992669 crio[696]: time="2024-03-14 19:45:34.976904287Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779,PodSandboxId:590e1e020909544e80888385a3a6e9a4a65ab091fb2e3fbf7817d108c5440b46,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444594414417209,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3f65c725-e834-45db-a417-fd47b421c883,},Annotations:map[string]string{io.kubernetes.container.hash: e977df94,io.kubernetes.container.restartCount: 0,io.kubernetes.con
tainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea,PodSandboxId:fccdedb983d8afb40fe6a2baa347240db88df69d64ef19adec025bf96d2ca5a9,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444592078861422,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-tn7lt,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: bf62479b-d5f9-4020-950d-8f3d71e952fa,},Annotations:map[string]string{io.kubernetes.container.hash: 336fe2df,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UD
P\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad,PodSandboxId:ef507fb570b82d812f74134bab74d57a9d64e8d03c785d6110c30f4e749002ca,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444591936782447,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-ngbmj,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a
85a72f9-bb81-4f35-97ec-585c80194c1c,},Annotations:map[string]string{io.kubernetes.container.hash: 6764b19,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b,PodSandboxId:9416359048f14476fd5234b4e5febc2c799ebe7972fc02bcfd4f622978d7df3a,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING,CreatedAt:
1710444591423864273,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-hzhsp,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: cac20e54-9d37-4f3b-a71a-e92c03f806d8,},Annotations:map[string]string{io.kubernetes.container.hash: 95874fd4,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525,PodSandboxId:a9f7e3967198d453dd990cd3d1ada572b394948fe26adafee07216cb426ef5d2,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444572230391610,Labels:map[string]s
tring{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 7d6e53c0267e65fbc4272161062157c5,},Annotations:map[string]string{io.kubernetes.container.hash: 7ef207eb,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944,PodSandboxId:53276463df34470726116efabfa927cb36781bd77bd0c17a04d09fba5a6e6888,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444572128919308,Labels:map[string]string{io.kubernetes.con
tainer.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4f03b52fd3fe2a7fdbaa67f2d69d71ad,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19,PodSandboxId:9e3bc57ca56fe559b7b2fb0b2a72f7013276bb329eb2a1cf22c32d9b909987b1,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444572123630291,Labels:map[string]string{io.kubernet
es.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: fa4a93aa8b6b303c7fe944e4adfcf7f1,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f,PodSandboxId:d980318214a5bc7496f7e252be6bf4d97e7f8c3c354ccb5ea83fd920961a3705,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444572036609972,Labels:map[string]string{io.kubernetes.container
.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-embed-certs-992669,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 96dbdba166be5bd3f2714cba5cf85166,},Annotations:map[string]string{io.kubernetes.container.hash: da291ed2,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0c7b81e0-ddf7-4e24-9d9a-60160a50b24b name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	86e468421ed20       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   15 minutes ago      Running             storage-provisioner       0                   590e1e0209095       storage-provisioner
	9523d7c5c9a7a       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   fccdedb983d8a       coredns-5dd5756b68-tn7lt
	a49df2538e8ba       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   15 minutes ago      Running             coredns                   0                   ef507fb570b82       coredns-5dd5756b68-ngbmj
	7aeb16f6338f3       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   15 minutes ago      Running             kube-proxy                0                   9416359048f14       kube-proxy-hzhsp
	8482deec524df       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   a9f7e3967198d       etcd-embed-certs-992669
	d78ee2c608f47       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   53276463df344       kube-controller-manager-embed-certs-992669
	594604683b152       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   9e3bc57ca56fe       kube-scheduler-embed-certs-992669
	6b8e561bab4ce       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   d980318214a5b       kube-apiserver-embed-certs-992669
	
	
	==> coredns [9523d7c5c9a7abfa65d76f957050e677a374c5ed0c5a7440a11cc619618619ea] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [a49df2538e8babc977ed31e995dbc173abd54f4d40f8aef4c59ecc4d80fe64ad] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               embed-certs-992669
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=embed-certs-992669
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=embed-certs-992669
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:29:35 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  embed-certs-992669
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:45:26 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:45:19 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:45:19 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:45:19 +0000   Thu, 14 Mar 2024 19:29:32 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:45:19 +0000   Thu, 14 Mar 2024 19:29:39 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.50.213
	  Hostname:    embed-certs-992669
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 56960399f5484e4ba79a009d8c003f55
	  System UUID:                56960399-f548-4e4b-a79a-009d8c003f55
	  Boot ID:                    285ed688-7e70-459d-973c-72febf3ebd7a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                          CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                          ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-ngbmj                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 coredns-5dd5756b68-tn7lt                      100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     15m
	  kube-system                 etcd-embed-certs-992669                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         15m
	  kube-system                 kube-apiserver-embed-certs-992669             250m (12%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-controller-manager-embed-certs-992669    200m (10%)    0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-proxy-hzhsp                              0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 kube-scheduler-embed-certs-992669             100m (5%)     0 (0%)      0 (0%)           0 (0%)         15m
	  kube-system                 metrics-server-57f55c9bc5-kr2n6               100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         15m
	  kube-system                 storage-provisioner                           0 (0%)        0 (0%)      0 (0%)           0 (0%)         15m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age   From             Message
	  ----    ------                   ----  ----             -------
	  Normal  Starting                 15m   kube-proxy       
	  Normal  Starting                 15m   kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  15m   kubelet          Node embed-certs-992669 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    15m   kubelet          Node embed-certs-992669 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     15m   kubelet          Node embed-certs-992669 status is now: NodeHasSufficientPID
	  Normal  NodeNotReady             15m   kubelet          Node embed-certs-992669 status is now: NodeNotReady
	  Normal  NodeAllocatableEnforced  15m   kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeReady                15m   kubelet          Node embed-certs-992669 status is now: NodeReady
	  Normal  RegisteredNode           15m   node-controller  Node embed-certs-992669 event: Registered Node embed-certs-992669 in Controller
	
	
	==> dmesg <==
	[  +0.053279] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.042657] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.561111] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.444900] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.752980] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000011] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +7.673703] systemd-fstab-generator[617]: Ignoring "noauto" option for root device
	[  +0.066816] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.066013] systemd-fstab-generator[629]: Ignoring "noauto" option for root device
	[  +0.183521] systemd-fstab-generator[643]: Ignoring "noauto" option for root device
	[  +0.145596] systemd-fstab-generator[655]: Ignoring "noauto" option for root device
	[  +0.266229] systemd-fstab-generator[681]: Ignoring "noauto" option for root device
	[  +5.434659] systemd-fstab-generator[776]: Ignoring "noauto" option for root device
	[  +0.064559] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.906116] systemd-fstab-generator[899]: Ignoring "noauto" option for root device
	[  +5.633473] kauditd_printk_skb: 97 callbacks suppressed
	[  +7.870543] kauditd_printk_skb: 74 callbacks suppressed
	[Mar14 19:29] kauditd_printk_skb: 3 callbacks suppressed
	[  +2.035177] systemd-fstab-generator[3402]: Ignoring "noauto" option for root device
	[  +4.695065] kauditd_printk_skb: 57 callbacks suppressed
	[  +3.097420] systemd-fstab-generator[3722]: Ignoring "noauto" option for root device
	[ +12.785966] kauditd_printk_skb: 14 callbacks suppressed
	[  +0.318935] systemd-fstab-generator[4056]: Ignoring "noauto" option for root device
	[Mar14 19:30] kauditd_printk_skb: 82 callbacks suppressed
	
	
	==> etcd [8482deec524df7fda52136063ae25badc395936701cdb893f30a725340538525] <==
	{"level":"info","ts":"2024-03-14T19:29:32.726896Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","added-peer-id":"afd31c34526e5864","added-peer-peer-urls":["https://192.168.50.213:2380"]}
	{"level":"info","ts":"2024-03-14T19:29:33.353389Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 is starting a new election at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353582Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353671Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgPreVoteResp from afd31c34526e5864 at term 1"}
	{"level":"info","ts":"2024-03-14T19:29:33.353735Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353763Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 received MsgVoteResp from afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353886Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"afd31c34526e5864 became leader at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.353915Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: afd31c34526e5864 elected leader afd31c34526e5864 at term 2"}
	{"level":"info","ts":"2024-03-14T19:29:33.358896Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.363123Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"afd31c34526e5864","local-member-attributes":"{Name:embed-certs-992669 ClientURLs:[https://192.168.50.213:2379]}","request-path":"/0/members/afd31c34526e5864/attributes","cluster-id":"64fdbb8e23141dc5","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:29:33.36357Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"64fdbb8e23141dc5","local-member-id":"afd31c34526e5864","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.364638Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:29:33.376393Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:29:33.381391Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.381471Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:29:33.37933Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:29:33.383389Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:29:33.381364Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.50.213:2379"}
	{"level":"info","ts":"2024-03-14T19:29:33.396155Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:39:33.449142Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":677}
	{"level":"info","ts":"2024-03-14T19:39:33.453577Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":677,"took":"3.833128ms","hash":4032288911}
	{"level":"info","ts":"2024-03-14T19:39:33.45387Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":4032288911,"revision":677,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T19:44:33.492594Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":920}
	{"level":"info","ts":"2024-03-14T19:44:33.496045Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":920,"took":"2.923651ms","hash":2024313435}
	{"level":"info","ts":"2024-03-14T19:44:33.496093Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2024313435,"revision":920,"compact-revision":677}
	
	
	==> kernel <==
	 19:45:35 up 21 min,  0 users,  load average: 0.60, 0.28, 0.15
	Linux embed-certs-992669 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [6b8e561bab4cec6ce9ca188e60d2d8ab908139face20a893beba3ba24fcfe27f] <==
	E0314 19:40:36.217471       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:40:36.217502       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:41:35.113450       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 19:42:35.112993       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:42:36.215670       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:42:36.215867       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:42:36.215930       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:42:36.217920       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:42:36.217952       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:42:36.217959       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:43:35.112880       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 19:44:35.112990       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:44:35.221262       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:44:35.221476       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:44:35.221784       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:44:36.222387       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:44:36.222460       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:44:36.222471       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:44:36.222554       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:44:36.222810       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:44:36.224074       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:45:35.113145       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	
	
	==> kube-controller-manager [d78ee2c608f479e9e330834dd9e001ff560a217bcba64234cf10afc4127dd944] <==
	I0314 19:39:51.600445       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:40:21.108225       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:40:21.610822       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:40:51.114408       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:40:51.619586       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:41:03.980526       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="420.286µs"
	I0314 19:41:15.989752       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="157.926µs"
	E0314 19:41:21.121011       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:41:21.628504       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:41:51.127547       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:41:51.638111       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:42:21.134540       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:21.648225       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:42:51.141175       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:51.657614       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:21.146710       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:21.669018       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:51.152558       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:51.678221       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:44:21.158904       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:44:21.688087       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:44:51.167045       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:44:51.701833       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:45:21.174159       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:45:21.719183       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [7aeb16f6338f306b68a150304056969afbb7d0e130b07af8c684fdc7f6ecbe7b] <==
	I0314 19:29:52.039986       1 server_others.go:69] "Using iptables proxy"
	I0314 19:29:52.069127       1 node.go:141] Successfully retrieved node IP: 192.168.50.213
	I0314 19:29:52.163005       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:29:52.163029       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:29:52.215755       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:29:52.215832       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:29:52.215997       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:29:52.216006       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:29:52.223194       1 config.go:188] "Starting service config controller"
	I0314 19:29:52.227518       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:29:52.227631       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:29:52.227643       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:29:52.235701       1 config.go:315] "Starting node config controller"
	I0314 19:29:52.235712       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:29:52.331541       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:29:52.427746       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:29:52.435843       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [594604683b15223e7fbb56240aa2e26c003c5d84c8ea70cdab7f26a230880e19] <==
	W0314 19:29:35.323739       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:29:35.324025       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:29:35.324160       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:35.324197       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.123605       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:29:36.123773       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:29:36.138643       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:29:36.138710       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:29:36.153611       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.153954       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StorageClass: failed to list *v1.StorageClass: storageclasses.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "storageclasses" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.178046       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	E0314 19:29:36.178126       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Namespace: failed to list *v1.Namespace: namespaces is forbidden: User "system:kube-scheduler" cannot list resource "namespaces" in API group "" at the cluster scope
	W0314 19:29:36.236716       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.236935       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.255427       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:29:36.255483       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:29:36.258435       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:29:36.258500       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:29:36.314951       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	E0314 19:29:36.315009       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolumeClaim: failed to list *v1.PersistentVolumeClaim: persistentvolumeclaims is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumeclaims" in API group "" at the cluster scope
	W0314 19:29:36.330327       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	E0314 19:29:36.330503       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Pod: failed to list *v1.Pod: pods is forbidden: User "system:kube-scheduler" cannot list resource "pods" in API group "" at the cluster scope
	W0314 19:29:36.393740       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:29:36.393789       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	I0314 19:29:39.289880       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:42:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:42:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:42:49 embed-certs-992669 kubelet[3729]: E0314 19:42:49.966019    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:43:04 embed-certs-992669 kubelet[3729]: E0314 19:43:04.965241    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:43:19 embed-certs-992669 kubelet[3729]: E0314 19:43:19.965175    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:43:34 embed-certs-992669 kubelet[3729]: E0314 19:43:34.965430    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:43:39 embed-certs-992669 kubelet[3729]: E0314 19:43:39.100190    3729 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:43:39 embed-certs-992669 kubelet[3729]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:43:39 embed-certs-992669 kubelet[3729]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:43:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:43:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:43:45 embed-certs-992669 kubelet[3729]: E0314 19:43:45.965561    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:44:00 embed-certs-992669 kubelet[3729]: E0314 19:44:00.966439    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:44:12 embed-certs-992669 kubelet[3729]: E0314 19:44:12.965171    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:44:27 embed-certs-992669 kubelet[3729]: E0314 19:44:27.966148    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:44:38 embed-certs-992669 kubelet[3729]: E0314 19:44:38.968189    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:44:39 embed-certs-992669 kubelet[3729]: E0314 19:44:39.101524    3729 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:44:39 embed-certs-992669 kubelet[3729]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:44:39 embed-certs-992669 kubelet[3729]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:44:39 embed-certs-992669 kubelet[3729]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:44:39 embed-certs-992669 kubelet[3729]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:44:49 embed-certs-992669 kubelet[3729]: E0314 19:44:49.967376    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:45:00 embed-certs-992669 kubelet[3729]: E0314 19:45:00.965876    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:45:12 embed-certs-992669 kubelet[3729]: E0314 19:45:12.966386    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	Mar 14 19:45:27 embed-certs-992669 kubelet[3729]: E0314 19:45:27.965595    3729 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-kr2n6" podUID="8ef90636-238c-4334-861a-e40c758d012b"
	
	
	==> storage-provisioner [86e468421ed202cd300bb4d6659e1cd2405ae7fc04ad047d6df03203c2c35779] <==
	I0314 19:29:54.534206       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:29:54.547814       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:29:54.547972       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:29:54.560652       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:29:54.560918       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd!
	I0314 19:29:54.562209       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"6d97ee26-9a5f-4277-9344-1b74430be063", APIVersion:"v1", ResourceVersion:"418", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd became leader
	I0314 19:29:54.662018       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_embed-certs-992669_e18a50a3-da21-43bf-893f-e064e73539dd!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p embed-certs-992669 -n embed-certs-992669
helpers_test.go:261: (dbg) Run:  kubectl --context embed-certs-992669 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-kr2n6
helpers_test.go:274: ======> post-mortem[TestStartStop/group/embed-certs/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6: exit status 1 (76.904157ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-kr2n6" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context embed-certs-992669 describe pod metrics-server-57f55c9bc5-kr2n6: exit status 1
--- FAIL: TestStartStop/group/embed-certs/serial/AddonExistsAfterStop (396.45s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/AddonExistsAfterStop (286.49s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/no-preload/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/no-preload/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-731976 -n no-preload-731976
start_stop_delete_test.go:287: TestStartStop/group/no-preload/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:44:18.158357531 +0000 UTC m=+5984.644216409
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context no-preload-731976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context no-preload-731976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.707µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context no-preload-731976 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
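start_stop_delete_test.go:297 asserts that the MetricsScraper image override (registry.k8s.io/echoserver:1.4, set by the "addons enable dashboard" invocation recorded in the Audit table below) appears in the dashboard-metrics-scraper deployment; since the in-test describe call hit the context deadline, a minimal manual check (not part of the test harness, assuming the deployment was rendered in the kubernetes-dashboard namespace) would be:

	kubectl --context no-preload-731976 -n kubernetes-dashboard get deploy dashboard-metrics-scraper \
	  -o jsonpath='{.spec.template.spec.containers[*].image}'
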
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:244: <<< TestStartStop/group/no-preload/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p no-preload-731976 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p no-preload-731976 logs -n 25: (1.310018102s)
helpers_test.go:252: TestStartStop/group/no-preload/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| delete  | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:43 UTC | 14 Mar 24 19:43 UTC |
	| start   | -p newest-cni-549136 --memory=2200 --alsologtostderr   | newest-cni-549136            | jenkins | v1.32.0 | 14 Mar 24 19:43 UTC |                     |
	|         | --wait=apiserver,system_pods,default_sa                |                              |         |         |                     |                     |
	|         | --feature-gates ServerSideApply=true                   |                              |         |         |                     |                     |
	|         | --network-plugin=cni                                   |                              |         |         |                     |                     |
	|         | --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16   |                              |         |         |                     |                     |
	|         | --driver=kvm2  --container-runtime=crio                |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:43:58
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:43:58.733684  997181 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:43:58.733960  997181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:43:58.733970  997181 out.go:304] Setting ErrFile to fd 2...
	I0314 19:43:58.733974  997181 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:43:58.734209  997181 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:43:58.734900  997181 out.go:298] Setting JSON to false
	I0314 19:43:58.735931  997181 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":98791,"bootTime":1710346648,"procs":205,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:43:58.735993  997181 start.go:139] virtualization: kvm guest
	I0314 19:43:58.738693  997181 out.go:177] * [newest-cni-549136] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:43:58.740714  997181 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:43:58.740711  997181 notify.go:220] Checking for updates...
	I0314 19:43:58.742032  997181 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:43:58.743451  997181 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:43:58.744834  997181 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:43:58.746021  997181 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:43:58.747113  997181 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:43:58.748767  997181 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:43:58.748933  997181 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:43:58.749036  997181 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:43:58.749157  997181 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:43:58.787578  997181 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:43:58.788803  997181 start.go:297] selected driver: kvm2
	I0314 19:43:58.788818  997181 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:43:58.788830  997181 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:43:58.789616  997181 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:43:58.789684  997181 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:43:58.805091  997181 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:43:58.805149  997181 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	W0314 19:43:58.805193  997181 out.go:239] ! With --network-plugin=cni, you will need to provide your own CNI. See --cni flag as a user-friendly alternative
	I0314 19:43:58.805485  997181 start_flags.go:966] Waiting for components: map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true]
	I0314 19:43:58.805527  997181 cni.go:84] Creating CNI manager for ""
	I0314 19:43:58.805538  997181 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:43:58.805553  997181 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:43:58.805634  997181 start.go:340] cluster config:
	{Name:newest-cni-549136 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-549136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local Conta
inerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetri
cs:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:43:58.805810  997181 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:43:58.808692  997181 out.go:177] * Starting "newest-cni-549136" primary control-plane node in "newest-cni-549136" cluster
	I0314 19:43:58.810014  997181 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:43:58.810051  997181 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 19:43:58.810058  997181 cache.go:56] Caching tarball of preloaded images
	I0314 19:43:58.810169  997181 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:43:58.810186  997181 cache.go:59] Finished verifying existence of preloaded tar for v1.29.0-rc.2 on crio
	I0314 19:43:58.810303  997181 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/newest-cni-549136/config.json ...
	I0314 19:43:58.810328  997181 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/newest-cni-549136/config.json: {Name:mk7b127331927a21eefccbabb4ecfb93be050984 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:43:58.810502  997181 start.go:360] acquireMachinesLock for newest-cni-549136: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:43:58.810540  997181 start.go:364] duration metric: took 20.251µs to acquireMachinesLock for "newest-cni-549136"
	I0314 19:43:58.810561  997181 start.go:93] Provisioning new machine with config: &{Name:newest-cni-549136 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{K
ubernetesVersion:v1.29.0-rc.2 ClusterName:newest-cni-549136 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates:ServerSideApply=true ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:kubeadm Key:pod-network-cidr Value:10.42.0.0/16}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:false default_sa:true extra:false kubelet:false node_ready:false system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-
host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:43:58.810635  997181 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 19:43:58.812395  997181 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=2200MB, Disk=20000MB) ...
	I0314 19:43:58.812551  997181 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:43:58.812593  997181 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:43:58.828277  997181 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42433
	I0314 19:43:58.829155  997181 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:43:58.829886  997181 main.go:141] libmachine: Using API Version  1
	I0314 19:43:58.829902  997181 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:43:58.830550  997181 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:43:58.830824  997181 main.go:141] libmachine: (newest-cni-549136) Calling .GetMachineName
	I0314 19:43:58.831069  997181 main.go:141] libmachine: (newest-cni-549136) Calling .DriverName
	I0314 19:43:58.831286  997181 start.go:159] libmachine.API.Create for "newest-cni-549136" (driver="kvm2")
	I0314 19:43:58.831308  997181 client.go:168] LocalClient.Create starting
	I0314 19:43:58.831371  997181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:43:58.831406  997181 main.go:141] libmachine: Decoding PEM data...
	I0314 19:43:58.831421  997181 main.go:141] libmachine: Parsing certificate...
	I0314 19:43:58.831480  997181 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:43:58.831499  997181 main.go:141] libmachine: Decoding PEM data...
	I0314 19:43:58.831510  997181 main.go:141] libmachine: Parsing certificate...
	I0314 19:43:58.831524  997181 main.go:141] libmachine: Running pre-create checks...
	I0314 19:43:58.831530  997181 main.go:141] libmachine: (newest-cni-549136) Calling .PreCreateCheck
	I0314 19:43:58.831901  997181 main.go:141] libmachine: (newest-cni-549136) Calling .GetConfigRaw
	I0314 19:43:58.832329  997181 main.go:141] libmachine: Creating machine...
	I0314 19:43:58.832344  997181 main.go:141] libmachine: (newest-cni-549136) Calling .Create
	I0314 19:43:58.832482  997181 main.go:141] libmachine: (newest-cni-549136) Creating KVM machine...
	I0314 19:43:58.833755  997181 main.go:141] libmachine: (newest-cni-549136) DBG | found existing default KVM network
	I0314 19:43:58.835160  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:58.835000  997204 network.go:211] skipping subnet 192.168.39.0/24 that is taken: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName:virbr1 IfaceIPv4:192.168.39.1 IfaceMTU:1500 IfaceMAC:52:54:00:82:01:3c} reservation:<nil>}
	I0314 19:43:58.836033  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:58.835960  997204 network.go:211] skipping subnet 192.168.50.0/24 that is taken: &{IP:192.168.50.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.50.0/24 Gateway:192.168.50.1 ClientMin:192.168.50.2 ClientMax:192.168.50.254 Broadcast:192.168.50.255 IsPrivate:true Interface:{IfaceName:virbr2 IfaceIPv4:192.168.50.1 IfaceMTU:1500 IfaceMAC:52:54:00:01:28:1b} reservation:<nil>}
	I0314 19:43:58.836879  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:58.836820  997204 network.go:211] skipping subnet 192.168.61.0/24 that is taken: &{IP:192.168.61.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.61.0/24 Gateway:192.168.61.1 ClientMin:192.168.61.2 ClientMax:192.168.61.254 Broadcast:192.168.61.255 IsPrivate:true Interface:{IfaceName:virbr3 IfaceIPv4:192.168.61.1 IfaceMTU:1500 IfaceMAC:52:54:00:c3:cf:ad} reservation:<nil>}
	I0314 19:43:58.838121  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:58.838038  997204 network.go:206] using free private subnet 192.168.72.0/24: &{IP:192.168.72.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.72.0/24 Gateway:192.168.72.1 ClientMin:192.168.72.2 ClientMax:192.168.72.254 Broadcast:192.168.72.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc0002c1750}
	I0314 19:43:58.838225  997181 main.go:141] libmachine: (newest-cni-549136) DBG | created network xml: 
	I0314 19:43:58.838240  997181 main.go:141] libmachine: (newest-cni-549136) DBG | <network>
	I0314 19:43:58.838251  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   <name>mk-newest-cni-549136</name>
	I0314 19:43:58.838259  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   <dns enable='no'/>
	I0314 19:43:58.838271  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   
	I0314 19:43:58.838282  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   <ip address='192.168.72.1' netmask='255.255.255.0'>
	I0314 19:43:58.838293  997181 main.go:141] libmachine: (newest-cni-549136) DBG |     <dhcp>
	I0314 19:43:58.838306  997181 main.go:141] libmachine: (newest-cni-549136) DBG |       <range start='192.168.72.2' end='192.168.72.253'/>
	I0314 19:43:58.838316  997181 main.go:141] libmachine: (newest-cni-549136) DBG |     </dhcp>
	I0314 19:43:58.838327  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   </ip>
	I0314 19:43:58.838334  997181 main.go:141] libmachine: (newest-cni-549136) DBG |   
	I0314 19:43:58.838342  997181 main.go:141] libmachine: (newest-cni-549136) DBG | </network>
	I0314 19:43:58.838353  997181 main.go:141] libmachine: (newest-cni-549136) DBG | 
	I0314 19:43:58.843671  997181 main.go:141] libmachine: (newest-cni-549136) DBG | trying to create private KVM network mk-newest-cni-549136 192.168.72.0/24...
	I0314 19:43:58.924427  997181 main.go:141] libmachine: (newest-cni-549136) DBG | private KVM network mk-newest-cni-549136 192.168.72.0/24 created
	I0314 19:43:58.924468  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:58.924382  997204 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:43:58.924482  997181 main.go:141] libmachine: (newest-cni-549136) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136 ...
	I0314 19:43:58.924501  997181 main.go:141] libmachine: (newest-cni-549136) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:43:58.924592  997181 main.go:141] libmachine: (newest-cni-549136) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:43:59.184799  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:59.184646  997204 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136/id_rsa...
	I0314 19:43:59.275006  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:59.274881  997204 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136/newest-cni-549136.rawdisk...
	I0314 19:43:59.275043  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Writing magic tar header
	I0314 19:43:59.275059  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Writing SSH key tar header
	I0314 19:43:59.275194  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:43:59.275100  997204 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136 ...
	I0314 19:43:59.275276  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136
	I0314 19:43:59.275304  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136 (perms=drwx------)
	I0314 19:43:59.275342  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:43:59.275404  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:43:59.275418  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:43:59.275429  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:43:59.275435  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:43:59.275442  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:43:59.275452  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:43:59.275460  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Checking permissions on dir: /home
	I0314 19:43:59.275476  997181 main.go:141] libmachine: (newest-cni-549136) DBG | Skipping /home - not owner
	I0314 19:43:59.275491  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:43:59.275507  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:43:59.275518  997181 main.go:141] libmachine: (newest-cni-549136) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:43:59.275526  997181 main.go:141] libmachine: (newest-cni-549136) Creating domain...
	I0314 19:43:59.276865  997181 main.go:141] libmachine: (newest-cni-549136) define libvirt domain using xml: 
	I0314 19:43:59.276900  997181 main.go:141] libmachine: (newest-cni-549136) <domain type='kvm'>
	I0314 19:43:59.276913  997181 main.go:141] libmachine: (newest-cni-549136)   <name>newest-cni-549136</name>
	I0314 19:43:59.276926  997181 main.go:141] libmachine: (newest-cni-549136)   <memory unit='MiB'>2200</memory>
	I0314 19:43:59.276935  997181 main.go:141] libmachine: (newest-cni-549136)   <vcpu>2</vcpu>
	I0314 19:43:59.276942  997181 main.go:141] libmachine: (newest-cni-549136)   <features>
	I0314 19:43:59.276949  997181 main.go:141] libmachine: (newest-cni-549136)     <acpi/>
	I0314 19:43:59.276956  997181 main.go:141] libmachine: (newest-cni-549136)     <apic/>
	I0314 19:43:59.276961  997181 main.go:141] libmachine: (newest-cni-549136)     <pae/>
	I0314 19:43:59.276968  997181 main.go:141] libmachine: (newest-cni-549136)     
	I0314 19:43:59.276974  997181 main.go:141] libmachine: (newest-cni-549136)   </features>
	I0314 19:43:59.276981  997181 main.go:141] libmachine: (newest-cni-549136)   <cpu mode='host-passthrough'>
	I0314 19:43:59.276986  997181 main.go:141] libmachine: (newest-cni-549136)   
	I0314 19:43:59.276993  997181 main.go:141] libmachine: (newest-cni-549136)   </cpu>
	I0314 19:43:59.276998  997181 main.go:141] libmachine: (newest-cni-549136)   <os>
	I0314 19:43:59.277006  997181 main.go:141] libmachine: (newest-cni-549136)     <type>hvm</type>
	I0314 19:43:59.277014  997181 main.go:141] libmachine: (newest-cni-549136)     <boot dev='cdrom'/>
	I0314 19:43:59.277028  997181 main.go:141] libmachine: (newest-cni-549136)     <boot dev='hd'/>
	I0314 19:43:59.277041  997181 main.go:141] libmachine: (newest-cni-549136)     <bootmenu enable='no'/>
	I0314 19:43:59.277051  997181 main.go:141] libmachine: (newest-cni-549136)   </os>
	I0314 19:43:59.277059  997181 main.go:141] libmachine: (newest-cni-549136)   <devices>
	I0314 19:43:59.277071  997181 main.go:141] libmachine: (newest-cni-549136)     <disk type='file' device='cdrom'>
	I0314 19:43:59.277086  997181 main.go:141] libmachine: (newest-cni-549136)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136/boot2docker.iso'/>
	I0314 19:43:59.277102  997181 main.go:141] libmachine: (newest-cni-549136)       <target dev='hdc' bus='scsi'/>
	I0314 19:43:59.277115  997181 main.go:141] libmachine: (newest-cni-549136)       <readonly/>
	I0314 19:43:59.277126  997181 main.go:141] libmachine: (newest-cni-549136)     </disk>
	I0314 19:43:59.277138  997181 main.go:141] libmachine: (newest-cni-549136)     <disk type='file' device='disk'>
	I0314 19:43:59.277150  997181 main.go:141] libmachine: (newest-cni-549136)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:43:59.277167  997181 main.go:141] libmachine: (newest-cni-549136)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/newest-cni-549136/newest-cni-549136.rawdisk'/>
	I0314 19:43:59.277182  997181 main.go:141] libmachine: (newest-cni-549136)       <target dev='hda' bus='virtio'/>
	I0314 19:43:59.277193  997181 main.go:141] libmachine: (newest-cni-549136)     </disk>
	I0314 19:43:59.277202  997181 main.go:141] libmachine: (newest-cni-549136)     <interface type='network'>
	I0314 19:43:59.277215  997181 main.go:141] libmachine: (newest-cni-549136)       <source network='mk-newest-cni-549136'/>
	I0314 19:43:59.277234  997181 main.go:141] libmachine: (newest-cni-549136)       <model type='virtio'/>
	I0314 19:43:59.277249  997181 main.go:141] libmachine: (newest-cni-549136)     </interface>
	I0314 19:43:59.277263  997181 main.go:141] libmachine: (newest-cni-549136)     <interface type='network'>
	I0314 19:43:59.277276  997181 main.go:141] libmachine: (newest-cni-549136)       <source network='default'/>
	I0314 19:43:59.277284  997181 main.go:141] libmachine: (newest-cni-549136)       <model type='virtio'/>
	I0314 19:43:59.277294  997181 main.go:141] libmachine: (newest-cni-549136)     </interface>
	I0314 19:43:59.277299  997181 main.go:141] libmachine: (newest-cni-549136)     <serial type='pty'>
	I0314 19:43:59.277313  997181 main.go:141] libmachine: (newest-cni-549136)       <target port='0'/>
	I0314 19:43:59.277325  997181 main.go:141] libmachine: (newest-cni-549136)     </serial>
	I0314 19:43:59.277356  997181 main.go:141] libmachine: (newest-cni-549136)     <console type='pty'>
	I0314 19:43:59.277381  997181 main.go:141] libmachine: (newest-cni-549136)       <target type='serial' port='0'/>
	I0314 19:43:59.277393  997181 main.go:141] libmachine: (newest-cni-549136)     </console>
	I0314 19:43:59.277405  997181 main.go:141] libmachine: (newest-cni-549136)     <rng model='virtio'>
	I0314 19:43:59.277420  997181 main.go:141] libmachine: (newest-cni-549136)       <backend model='random'>/dev/random</backend>
	I0314 19:43:59.277431  997181 main.go:141] libmachine: (newest-cni-549136)     </rng>
	I0314 19:43:59.277439  997181 main.go:141] libmachine: (newest-cni-549136)     
	I0314 19:43:59.277450  997181 main.go:141] libmachine: (newest-cni-549136)     
	I0314 19:43:59.277458  997181 main.go:141] libmachine: (newest-cni-549136)   </devices>
	I0314 19:43:59.277470  997181 main.go:141] libmachine: (newest-cni-549136) </domain>
	I0314 19:43:59.277477  997181 main.go:141] libmachine: (newest-cni-549136) 
	I0314 19:43:59.282305  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:19:15 in network default
	I0314 19:43:59.283010  997181 main.go:141] libmachine: (newest-cni-549136) Ensuring networks are active...
	I0314 19:43:59.283033  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:43:59.283862  997181 main.go:141] libmachine: (newest-cni-549136) Ensuring network default is active
	I0314 19:43:59.284271  997181 main.go:141] libmachine: (newest-cni-549136) Ensuring network mk-newest-cni-549136 is active
	I0314 19:43:59.284871  997181 main.go:141] libmachine: (newest-cni-549136) Getting domain xml...
	I0314 19:43:59.285817  997181 main.go:141] libmachine: (newest-cni-549136) Creating domain...
	I0314 19:44:00.553760  997181 main.go:141] libmachine: (newest-cni-549136) Waiting to get IP...
	I0314 19:44:00.554780  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:00.555328  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:00.555411  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:00.555319  997204 retry.go:31] will retry after 308.820611ms: waiting for machine to come up
	I0314 19:44:00.865923  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:00.866524  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:00.866557  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:00.866454  997204 retry.go:31] will retry after 330.649448ms: waiting for machine to come up
	I0314 19:44:01.199096  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:01.199632  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:01.199661  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:01.199571  997204 retry.go:31] will retry after 309.411521ms: waiting for machine to come up
	I0314 19:44:01.511157  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:01.511665  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:01.511686  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:01.511604  997204 retry.go:31] will retry after 473.60087ms: waiting for machine to come up
	I0314 19:44:01.987445  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:01.987953  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:01.987986  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:01.987912  997204 retry.go:31] will retry after 542.409043ms: waiting for machine to come up
	I0314 19:44:02.531397  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:02.532035  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:02.532086  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:02.531966  997204 retry.go:31] will retry after 872.11886ms: waiting for machine to come up
	I0314 19:44:03.405346  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:03.405848  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:03.405875  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:03.405780  997204 retry.go:31] will retry after 1.174018026s: waiting for machine to come up
	I0314 19:44:04.581168  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:04.581663  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:04.581688  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:04.581612  997204 retry.go:31] will retry after 1.214221961s: waiting for machine to come up
	I0314 19:44:05.797522  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:05.798056  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:05.798081  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:05.798008  997204 retry.go:31] will retry after 1.340408342s: waiting for machine to come up
	I0314 19:44:07.140347  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:07.140859  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:07.140889  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:07.140810  997204 retry.go:31] will retry after 1.928984128s: waiting for machine to come up
	I0314 19:44:09.071925  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:09.072452  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:09.072487  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:09.072400  997204 retry.go:31] will retry after 2.339920662s: waiting for machine to come up
	I0314 19:44:11.414343  997181 main.go:141] libmachine: (newest-cni-549136) DBG | domain newest-cni-549136 has defined MAC address 52:54:00:0e:3e:7c in network mk-newest-cni-549136
	I0314 19:44:11.414853  997181 main.go:141] libmachine: (newest-cni-549136) DBG | unable to find current IP address of domain newest-cni-549136 in network mk-newest-cni-549136
	I0314 19:44:11.414884  997181 main.go:141] libmachine: (newest-cni-549136) DBG | I0314 19:44:11.414798  997204 retry.go:31] will retry after 2.740203352s: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.839327921Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,Verbose:false,}" file="otel-collector/interceptors.go:62" id=d531c43c-7868-4349-b0a1-aa79b59ea83c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.839449852Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710444358542361711,StartedAt:1710444358641266592,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-controller-manager:v1.29.0-rc.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kubernetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.contain
er.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/4af4fb56aaf2dd025068b4aa6814d5c0/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/4af4fb56aaf2dd025068b4aa6814d5c0/containers/kube-controller-manager/6ae09fa6,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/ssl/certs,HostPath:/etc/ssl/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/controller-manager.conf,HostPath:/etc/kubernetes/controller-manager.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PR
IVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/share/ca-certificates,HostPath:/usr/share/ca-certificates,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/var/lib/minikube/certs,HostPath:/var/lib/minikube/certs,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,HostPath:/usr/libexec/kubernetes/kubelet-plugins/volume/exec,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-controller-manager-no-preload-731976_4af4fb56aaf2dd025068b4aa6814d5c0/kube-controller-manager/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{CpuPeriod:100000,CpuQuota:0,CpuShares:204,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,Cp
usetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=d531c43c-7868-4349-b0a1-aa79b59ea83c name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.839815858Z" level=debug msg="Request: &ContainerStatusRequest{ContainerId:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,Verbose:false,}" file="otel-collector/interceptors.go:62" id=a4bf1a51-dcb1-4f10-aec9-b19e032076ce name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.839937168Z" level=debug msg="Response: &ContainerStatusResponse{Status:&ContainerStatus{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},State:CONTAINER_RUNNING,CreatedAt:1710444358510213921,StartedAt:1710444358670850369,FinishedAt:0,ExitCode:0,Image:&ImageSpec{Image:registry.k8s.io/kube-scheduler:v1.29.0-rc.2,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Reason:,Message:,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.container.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/term
ination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},Mounts:[]*Mount{&Mount{ContainerPath:/etc/hosts,HostPath:/var/lib/kubelet/pods/a3e3e9be82e0c3e36d0567e37ffdff62/etc-hosts,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/dev/termination-log,HostPath:/var/lib/kubelet/pods/a3e3e9be82e0c3e36d0567e37ffdff62/containers/kube-scheduler/36d4919e,Readonly:false,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},&Mount{ContainerPath:/etc/kubernetes/scheduler.conf,HostPath:/etc/kubernetes/scheduler.conf,Readonly:true,SelinuxRelabel:false,Propagation:PROPAGATION_PRIVATE,UidMappings:[]*IDMapping{},GidMappings:[]*IDMapping{},},},LogPath:/var/log/pods/kube-system_kube-scheduler-no-preload-731976_a3e3e9be82e0c3e36d0567e37ffdff62/kube-scheduler/1.log,Resources:&ContainerResources{Linux:&LinuxContainerResources{
CpuPeriod:100000,CpuQuota:0,CpuShares:102,MemoryLimitInBytes:0,OomScoreAdj:-997,CpusetCpus:,CpusetMems:,HugepageLimits:[]*HugepageLimit{&HugepageLimit{PageSize:2MB,Limit:0,},},Unified:map[string]string{memory.oom.group: 1,memory.swap.max: 0,},MemorySwapLimitInBytes:0,},Windows:nil,},},Info:map[string]string{},}" file="otel-collector/interceptors.go:74" id=a4bf1a51-dcb1-4f10-aec9-b19e032076ce name=/runtime.v1.RuntimeService/ContainerStatus
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.871439103Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=ec33aff7-bd2d-47d6-9d94-7d74ae542750 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.871532283Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=ec33aff7-bd2d-47d6-9d94-7d74ae542750 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.873448566Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=6c9354c5-f103-4031-8ab8-31e5f0c38ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.873767579Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445458873749885,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=6c9354c5-f103-4031-8ab8-31e5f0c38ab8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.874592831Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=f1d02a45-7d9f-4550-8478-e476f7616377 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.874639450Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=f1d02a45-7d9f-4550-8478-e476f7616377 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.874822610Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=f1d02a45-7d9f-4550-8478-e476f7616377 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.922772851Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=653ba194-cc42-40aa-82c7-f553c7d5f935 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.922839258Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=653ba194-cc42-40aa-82c7-f553c7d5f935 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.924896257Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=1e37cbb0-e33a-4725-9fd4-437ff6dc9b90 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.925508187Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445458925484287,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=1e37cbb0-e33a-4725-9fd4-437ff6dc9b90 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.926022780Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=60af9f11-39c0-4091-a94e-0618a05568e3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.926170056Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=60af9f11-39c0-4091-a94e-0618a05568e3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.926353453Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=60af9f11-39c0-4091-a94e-0618a05568e3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.963530299Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=34cfeeee-f0c6-4f30-bf47-aa86dfae308e name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.963628896Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=34cfeeee-f0c6-4f30-bf47-aa86dfae308e name=/runtime.v1.RuntimeService/Version
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.965283500Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cb49459b-1a43-457e-9332-cd08992af671 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.965633467Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445458965613989,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:97422,},InodesUsed:&UInt64Value{Value:48,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cb49459b-1a43-457e-9332-cd08992af671 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.966663478Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0109d75a-571e-4a22-b18b-01aac14ef76d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.966778469Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0109d75a-571e-4a22-b18b-01aac14ef76d name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:44:18 no-preload-731976 crio[692]: time="2024-03-14 19:44:18.966999886Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:2,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444392979977448,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b313,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 2,io.kubernetes.cont
ainer.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:361b64b9fbb1cf9e05383cc0962bd15479955aeafb9108888b84a9d0f8c3c92c,PodSandboxId:c90dc3736e64c0718609f8a0b208c8b3e1ef1861f5dea0656e9b0742eb25d5d9,Metadata:&ContainerMetadata{Name:busybox,Attempt:1,},Image:&ImageSpec{Image:gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c,State:CONTAINER_RUNNING,CreatedAt:1710444370797668471,Labels:map[string]string{io.kubernetes.container.name: busybox,io.kubernetes.pod.name: busybox,io.kubernetes.pod.namespace: default,io.kubernetes.pod.uid: 03b44efa-57c3-4ad4-869b-a23129e8aeb1,},Annotations:map[string]string{io.kubernetes.container.hash: ae7c2841,io.kubernetes.container.restartCount: 1,io.kubernetes.container.termination
MessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13,PodSandboxId:0772f8e21adbbeac47b0f4853eb598d49fc6d6757a5fc9267dd3ebae39c8ee6f,Metadata:&ContainerMetadata{Name:coredns,Attempt:1,},Image:&ImageSpec{Image:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4,State:CONTAINER_RUNNING,CreatedAt:1710444369801389028,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-76f75df574-mcddh,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d78c0561-04ac-4899-8a97-f3a04a1fa830,},Annotations:map[string]string{io.kubernetes.container.hash: 16798f6a,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"d
ns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860,PodSandboxId:35e6ab99eeae6e3c2852a8734b0ed7180e4994120daecbf08f613c14996aae72,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:1,},Image:&ImageSpec{Image:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834,State:CONTAINER_RUNNING,CreatedAt:1710444362180004678,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-fkn7b,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e7f519f9-13fd-4e04-ac
0c-c9ad8ee67cf9,},Annotations:map[string]string{io.kubernetes.container.hash: 3d0dad6d,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8,PodSandboxId:df59d3e5bdf722c425b2a051ea27756589f497f93e7d2b25c9b90e0533f6e04d,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:1,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_EXITED,CreatedAt:1710444362132144186,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 3907dc47-cb82-4df6-8e40-a64bf166b3
13,},Annotations:map[string]string{io.kubernetes.container.hash: 2984a647,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89,PodSandboxId:53aef9fd1c6e1291e17fee7f65dd86af560857754320fa8ed4be439f329aabde,Metadata:&ContainerMetadata{Name:etcd,Attempt:1,},Image:&ImageSpec{Image:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7,State:CONTAINER_RUNNING,CreatedAt:1710444358486616542,Labels:map[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 30497f2d06edea48149ce57893e1893d,},Annotations:map[string]string{io.kuber
netes.container.hash: bff329bf,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45,PodSandboxId:ab4a8b629061d3d88950a0aa62671a30d11dd847887dfb9d15d950960fa604df,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:1,},Image:&ImageSpec{Image:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f,State:CONTAINER_RUNNING,CreatedAt:1710444358422996769,Labels:map[string]string{io.kubernetes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 669737fe4b8bab7cba37920e4eb513d0,},Annotations:map[string]string{io.kubernetes.contain
er.hash: 81b061a4,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0,PodSandboxId:32c456183e262219b7e9de43d89c6a2872222f862a30a8c10fc446a45174e41c,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:1,},Image:&ImageSpec{Image:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d,State:CONTAINER_RUNNING,CreatedAt:1710444358414171223,Labels:map[string]string{io.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 4af4fb56aaf2dd025068b4aa6814d5c0,},Annotations:map[string]string{io.kube
rnetes.container.hash: f18bb92e,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55,PodSandboxId:fb759e322f09b365cd7610fbbb7a76b9b14623054665795cfde07759b614dd4d,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:1,},Image:&ImageSpec{Image:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210,State:CONTAINER_RUNNING,CreatedAt:1710444358425454703,Labels:map[string]string{io.kubernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-no-preload-731976,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: a3e3e9be82e0c3e36d0567e37ffdff62,},Annotations:map[string]string{io.kubernetes.contai
ner.hash: 7d8a0274,io.kubernetes.container.restartCount: 1,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0109d75a-571e-4a22-b18b-01aac14ef76d name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                                                                 CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	aeed99a1392ec       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      17 minutes ago      Running             storage-provisioner       2                   df59d3e5bdf72       storage-provisioner
	361b64b9fbb1c       gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e   18 minutes ago      Running             busybox                   1                   c90dc3736e64c       busybox
	ec0841c5bdfb8       cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4                                      18 minutes ago      Running             coredns                   1                   0772f8e21adbb       coredns-76f75df574-mcddh
	3a8800127b849       cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834                                      18 minutes ago      Running             kube-proxy                1                   35e6ab99eeae6       kube-proxy-fkn7b
	27e79a384706c       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562                                      18 minutes ago      Exited              storage-provisioner       1                   df59d3e5bdf72       storage-provisioner
	db597de214816       a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7                                      18 minutes ago      Running             etcd                      1                   53aef9fd1c6e1       etcd-no-preload-731976
	5b8e529f94562       4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210                                      18 minutes ago      Running             kube-scheduler            1                   fb759e322f09b       kube-scheduler-no-preload-731976
	a09531e613ae5       bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f                                      18 minutes ago      Running             kube-apiserver            1                   ab4a8b629061d       kube-apiserver-no-preload-731976
	9151eb0c1b33c       d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d                                      18 minutes ago      Running             kube-controller-manager   1                   32c456183e262       kube-controller-manager-no-preload-731976
	
	
	==> coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 6c8bd46af3d98e03c4ae8e438c65dd0c69a5f817565481bcf1725dd66ff794963b7938c81e3a23d4c2ad9e52f818076e819219c79e8007dd90564767ed68ba4c
	CoreDNS-1.11.1
	linux/amd64, go1.20.7, ae2bbc2
	[INFO] 127.0.0.1:48549 - 19461 "HINFO IN 4591029028017746442.7322755858764164589. udp 57 false 512" NXDOMAIN qr,rd,ra 57 0.029339935s
	
	
	==> describe nodes <==
	Name:               no-preload-731976
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=no-preload-731976
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=no-preload-731976
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_15_52_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:15:48 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  no-preload-731976
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:44:13 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:41:50 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:41:50 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:41:50 +0000   Thu, 14 Mar 2024 19:15:46 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:41:50 +0000   Thu, 14 Mar 2024 19:26:11 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.39.148
	  Hostname:    no-preload-731976
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 ca284e76bb3247e9805a6c098215c046
	  System UUID:                ca284e76-bb32-47e9-805a-6c098215c046
	  Boot ID:                    ec12a515-8905-41c1-8a1d-83d8375cab5a
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.29.0-rc.2
	  Kube-Proxy Version:         v1.29.0-rc.2
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                         CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                         ------------  ----------  ---------------  -------------  ---
	  default                     busybox                                      0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         27m
	  kube-system                 coredns-76f75df574-mcddh                     100m (5%!)(MISSING)     0 (0%!)(MISSING)      70Mi (3%!)(MISSING)        170Mi (8%!)(MISSING)     28m
	  kube-system                 etcd-no-preload-731976                       100m (5%!)(MISSING)     0 (0%!)(MISSING)      100Mi (4%!)(MISSING)       0 (0%!)(MISSING)         28m
	  kube-system                 kube-apiserver-no-preload-731976             250m (12%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-controller-manager-no-preload-731976    200m (10%!)(MISSING)    0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-proxy-fkn7b                             0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 kube-scheduler-no-preload-731976             100m (5%!)(MISSING)     0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	  kube-system                 metrics-server-57f55c9bc5-rhg5r              100m (5%!)(MISSING)     0 (0%!)(MISSING)      200Mi (9%!)(MISSING)       0 (0%!)(MISSING)         27m
	  kube-system                 storage-provisioner                          0 (0%!)(MISSING)        0 (0%!)(MISSING)      0 (0%!)(MISSING)           0 (0%!)(MISSING)         28m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                850m (42%!)(MISSING)   0 (0%!)(MISSING)
	  memory             370Mi (17%!)(MISSING)  170Mi (8%!)(MISSING)
	  ephemeral-storage  0 (0%!)(MISSING)       0 (0%!)(MISSING)
	  hugepages-2Mi      0 (0%!)(MISSING)       0 (0%!)(MISSING)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 28m                kube-proxy       
	  Normal  Starting                 18m                kube-proxy       
	  Normal  NodeAllocatableEnforced  28m                kubelet          Updated Node Allocatable limit across pods
	  Normal  NodeHasSufficientMemory  28m                kubelet          Node no-preload-731976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    28m                kubelet          Node no-preload-731976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     28m                kubelet          Node no-preload-731976 status is now: NodeHasSufficientPID
	  Normal  NodeReady                28m                kubelet          Node no-preload-731976 status is now: NodeReady
	  Normal  Starting                 28m                kubelet          Starting kubelet.
	  Normal  RegisteredNode           28m                node-controller  Node no-preload-731976 event: Registered Node no-preload-731976 in Controller
	  Normal  Starting                 18m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  18m (x8 over 18m)  kubelet          Node no-preload-731976 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    18m (x8 over 18m)  kubelet          Node no-preload-731976 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     18m (x7 over 18m)  kubelet          Node no-preload-731976 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  18m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           18m                node-controller  Node no-preload-731976 event: Registered Node no-preload-731976 in Controller
	
	
	==> dmesg <==
	[Mar14 19:25] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000000] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000001] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056178] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.047988] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.870385] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.687564] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.739459] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[  +6.516277] systemd-fstab-generator[610]: Ignoring "noauto" option for root device
	[  +0.072805] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.080686] systemd-fstab-generator[622]: Ignoring "noauto" option for root device
	[  +0.173767] systemd-fstab-generator[636]: Ignoring "noauto" option for root device
	[  +0.142530] systemd-fstab-generator[648]: Ignoring "noauto" option for root device
	[  +0.262325] systemd-fstab-generator[677]: Ignoring "noauto" option for root device
	[ +17.382411] systemd-fstab-generator[1192]: Ignoring "noauto" option for root device
	[  +0.067518] kauditd_printk_skb: 130 callbacks suppressed
	[  +2.182667] systemd-fstab-generator[1316]: Ignoring "noauto" option for root device
	[Mar14 19:26] kauditd_printk_skb: 100 callbacks suppressed
	[  +2.508221] systemd-fstab-generator[1916]: Ignoring "noauto" option for root device
	[  +4.197170] kauditd_printk_skb: 56 callbacks suppressed
	[  +5.268522] kauditd_printk_skb: 30 callbacks suppressed
	
	
	==> etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] <==
	{"level":"info","ts":"2024-03-14T19:25:59.110316Z","caller":"embed/etcd.go:569","msg":"cmux::serve","address":"192.168.39.148:2380"}
	{"level":"info","ts":"2024-03-14T19:25:59.111264Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa switched to configuration voters=(10675872347264799914)"}
	{"level":"info","ts":"2024-03-14T19:25:59.111336Z","caller":"membership/cluster.go:421","msg":"added member","cluster-id":"8bee001f44ea94","local-member-id":"942851562e2254aa","added-peer-id":"942851562e2254aa","added-peer-peer-urls":["https://192.168.39.148:2380"]}
	{"level":"info","ts":"2024-03-14T19:25:59.111445Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"8bee001f44ea94","local-member-id":"942851562e2254aa","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:25:59.111516Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:26:00.138816Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa is starting a new election at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.138925Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became pre-candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.138972Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa received MsgPreVoteResp from 942851562e2254aa at term 2"}
	{"level":"info","ts":"2024-03-14T19:26:00.139002Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became candidate at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139111Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa received MsgVoteResp from 942851562e2254aa at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139157Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"942851562e2254aa became leader at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.139183Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: 942851562e2254aa elected leader 942851562e2254aa at term 3"}
	{"level":"info","ts":"2024-03-14T19:26:00.140819Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"942851562e2254aa","local-member-attributes":"{Name:no-preload-731976 ClientURLs:[https://192.168.39.148:2379]}","request-path":"/0/members/942851562e2254aa/attributes","cluster-id":"8bee001f44ea94","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:26:00.140867Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:26:00.141396Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:26:00.141688Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:26:00.141728Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:26:00.1433Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:26:00.143465Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.39.148:2379"}
	{"level":"info","ts":"2024-03-14T19:36:00.171598Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":846}
	{"level":"info","ts":"2024-03-14T19:36:00.175443Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":846,"took":"3.506881ms","hash":1364715535}
	{"level":"info","ts":"2024-03-14T19:36:00.17559Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1364715535,"revision":846,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T19:41:00.180177Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":1088}
	{"level":"info","ts":"2024-03-14T19:41:00.182538Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":1088,"took":"1.687836ms","hash":3479361779}
	{"level":"info","ts":"2024-03-14T19:41:00.182631Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":3479361779,"revision":1088,"compact-revision":846}
	
	
	==> kernel <==
	 19:44:19 up 18 min,  0 users,  load average: 0.04, 0.14, 0.13
	Linux no-preload-731976 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] <==
	I0314 19:39:02.684497       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:41:01.685854       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:41:01.686226       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	W0314 19:41:02.686359       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:41:02.686412       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:41:02.686421       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:41:02.686535       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:41:02.686615       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:41:02.687587       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:42:02.687509       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:42:02.687885       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:42:02.687919       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:42:02.687850       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:42:02.687991       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:42:02.688947       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:44:02.688912       1 handler_proxy.go:93] no RequestInfo found in the context
	W0314 19:44:02.689362       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:44:02.689446       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:44:02.689472       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	E0314 19:44:02.689546       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:44:02.691365       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	
	
	==> kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] <==
	I0314 19:38:44.707891       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:39:14.294744       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:39:14.716413       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:39:44.300563       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:39:44.725830       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:40:14.305841       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:40:14.735883       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:40:44.311194       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:40:44.745471       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:41:14.317505       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:41:14.753336       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:41:44.324280       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:41:44.761888       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:42:14.330168       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:14.769959       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:42:27.771893       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="343.402µs"
	I0314 19:42:38.765512       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="78.231µs"
	E0314 19:42:44.335588       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:44.778583       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:14.341461       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:14.786582       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:44.347100       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:44.796355       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:44:14.353736       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:44:14.804110       1 garbagecollector.go:835] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] <==
	I0314 19:26:02.341212       1 server_others.go:72] "Using iptables proxy"
	I0314 19:26:02.352720       1 server.go:1050] "Successfully retrieved node IP(s)" IPs=["192.168.39.148"]
	I0314 19:26:02.396641       1 server_others.go:146] "No iptables support for family" ipFamily="IPv6"
	I0314 19:26:02.396658       1 server.go:654] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:26:02.396668       1 server_others.go:168] "Using iptables Proxier"
	I0314 19:26:02.400441       1 proxier.go:246] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:26:02.400745       1 server.go:865] "Version info" version="v1.29.0-rc.2"
	I0314 19:26:02.400759       1 server.go:867] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:26:02.401977       1 config.go:188] "Starting service config controller"
	I0314 19:26:02.402132       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:26:02.402185       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:26:02.402211       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:26:02.406423       1 config.go:315] "Starting node config controller"
	I0314 19:26:02.406481       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:26:02.503253       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:26:02.503333       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:26:02.506579       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] <==
	I0314 19:25:59.374195       1 serving.go:380] Generated self-signed cert in-memory
	W0314 19:26:01.604735       1 requestheader_controller.go:193] Unable to get configmap/extension-apiserver-authentication in kube-system.  Usually fixed by 'kubectl create rolebinding -n kube-system ROLEBINDING_NAME --role=extension-apiserver-authentication-reader --serviceaccount=YOUR_NS:YOUR_SA'
	W0314 19:26:01.604792       1 authentication.go:368] Error looking up in-cluster authentication configuration: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot get resource "configmaps" in API group "" in the namespace "kube-system"
	W0314 19:26:01.604812       1 authentication.go:369] Continuing without authentication configuration. This may treat all requests as anonymous.
	W0314 19:26:01.604818       1 authentication.go:370] To require authentication configuration lookup to succeed, set --authentication-tolerate-lookup-failure=false
	I0314 19:26:01.637783       1 server.go:154] "Starting Kubernetes Scheduler" version="v1.29.0-rc.2"
	I0314 19:26:01.637853       1 server.go:156] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:26:01.642580       1 secure_serving.go:213] Serving securely on 127.0.0.1:10259
	I0314 19:26:01.642741       1 configmap_cafile_content.go:202] "Starting controller" name="client-ca::kube-system::extension-apiserver-authentication::client-ca-file"
	I0314 19:26:01.642781       1 shared_informer.go:311] Waiting for caches to sync for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	I0314 19:26:01.642797       1 tlsconfig.go:240] "Starting DynamicServingCertificateController"
	I0314 19:26:01.743422       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:41:57 no-preload-731976 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:42:04 no-preload-731976 kubelet[1323]: E0314 19:42:04.747632    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:42:15 no-preload-731976 kubelet[1323]: E0314 19:42:15.764468    1323 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:42:15 no-preload-731976 kubelet[1323]: E0314 19:42:15.764535    1323 kuberuntime_image.go:55] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:42:15 no-preload-731976 kubelet[1323]: E0314 19:42:15.764744    1323 kuberuntime_manager.go:1262] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-skkwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{Pro
beHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fi
le,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-rhg5r_kube-system(5753b397-3b41-4fa7-8f7f-65db44a90b06): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 19:42:15 no-preload-731976 kubelet[1323]: E0314 19:42:15.764794    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:42:27 no-preload-731976 kubelet[1323]: E0314 19:42:27.747828    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:42:38 no-preload-731976 kubelet[1323]: E0314 19:42:38.747125    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:42:52 no-preload-731976 kubelet[1323]: E0314 19:42:52.747007    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:42:57 no-preload-731976 kubelet[1323]: E0314 19:42:57.789350    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:42:57 no-preload-731976 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:42:57 no-preload-731976 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:42:57 no-preload-731976 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:42:57 no-preload-731976 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:43:05 no-preload-731976 kubelet[1323]: E0314 19:43:05.747210    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:43:20 no-preload-731976 kubelet[1323]: E0314 19:43:20.747414    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:43:35 no-preload-731976 kubelet[1323]: E0314 19:43:35.747625    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:43:50 no-preload-731976 kubelet[1323]: E0314 19:43:50.748614    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:43:57 no-preload-731976 kubelet[1323]: E0314 19:43:57.787899    1323 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:43:57 no-preload-731976 kubelet[1323]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:43:57 no-preload-731976 kubelet[1323]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:43:57 no-preload-731976 kubelet[1323]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:43:57 no-preload-731976 kubelet[1323]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:44:03 no-preload-731976 kubelet[1323]: E0314 19:44:03.748747    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	Mar 14 19:44:15 no-preload-731976 kubelet[1323]: E0314 19:44:15.746906    1323 pod_workers.go:1298] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-rhg5r" podUID="5753b397-3b41-4fa7-8f7f-65db44a90b06"
	
	
	==> storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] <==
	I0314 19:26:02.268142       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	F0314 19:26:32.275453       1 main.go:39] error getting server version: Get "https://10.96.0.1:443/version?timeout=32s": dial tcp 10.96.0.1:443: i/o timeout
	
	
	==> storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] <==
	I0314 19:26:33.073620       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:26:33.086947       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:26:33.087141       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:26:50.490890       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:26:50.491251       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979!
	I0314 19:26:50.491724       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"f0b07f95-d600-4dbe-9530-da031d2a9224", APIVersion:"v1", ResourceVersion:"626", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979 became leader
	I0314 19:26:50.592385       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_no-preload-731976_648f2eb3-e49c-4ead-8b3b-df039241b979!
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p no-preload-731976 -n no-preload-731976
helpers_test.go:261: (dbg) Run:  kubectl --context no-preload-731976 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-rhg5r
helpers_test.go:274: ======> post-mortem[TestStartStop/group/no-preload/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r: exit status 1 (74.213202ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-rhg5r" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context no-preload-731976 describe pod metrics-server-57f55c9bc5-rhg5r: exit status 1
--- FAIL: TestStartStop/group/no-preload/serial/AddonExistsAfterStop (286.49s)
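For reference, the only non-running pod the post-mortem found is metrics-server-57f55c9bc5-rhg5r, which the kubelet log above shows stuck in ImagePullBackOff on fake.domain/registry.k8s.io/echoserver:1.4 ("lookup fake.domain: no such host"), and that pod no longer existed by the time kubectl describe ran, hence the NotFound error. A minimal sketch of the same checks run by hand against this profile (assuming the no-preload-731976 cluster is still up; the dashboard namespace/selector mirror those quoted in the next section, and the metrics-server label is assumed from the standard metrics-server manifest):

	kubectl --context no-preload-731976 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context no-preload-731976 -n kube-system describe pod -l k8s-app=metrics-server
	kubectl --context no-preload-731976 get apiservice v1beta1.metrics.k8s.io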

                                                
                                    
TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (424.76s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
start_stop_delete_test.go:287: ***** TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
start_stop_delete_test.go:287: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop: showing logs for failed pods as of 2024-03-14 19:46:44.430682279 +0000 UTC m=+6130.916541169
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-440341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.218µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context default-k8s-diff-port-440341 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
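The assertion at start_stop_delete_test.go:297 is a substring check: the describe output for the dashboard-metrics-scraper deployment should mention the substituted image, registry.k8s.io/echoserver:1.4, but in this run the describe call itself hit the test's context deadline, so there was nothing to match. A rough manual equivalent is sketched below; it is not the test's code, the context name is taken from this report, and the jsonpath query stands in for the describe call.

// addon_image_sketch.go - hypothetical, rough equivalent of the image assertion.
package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	const (
		kubeContext = "default-k8s-diff-port-440341" // from this report; adjust per run
		wantImage   = "registry.k8s.io/echoserver:1.4"
	)

	// Ask for the images used by the addon deployment's pod template.
	out, err := exec.Command("kubectl", "--context", kubeContext,
		"-n", "kubernetes-dashboard", "get", "deploy", "dashboard-metrics-scraper",
		"-o", "jsonpath={.spec.template.spec.containers[*].image}").Output()
	if err != nil {
		fmt.Println("query failed (in this run the deployment never became reachable):", err)
		return
	}
	if strings.Contains(string(out), wantImage) {
		fmt.Println("addon deployment uses the expected image")
	} else {
		fmt.Printf("unexpected image(s): %s\n", out)
	}
}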
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:244: <<< TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p default-k8s-diff-port-440341 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p default-k8s-diff-port-440341 logs -n 25: (1.731596194s)
helpers_test.go:252: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| Command |                         Args                         |    Profile    |  User   | Version |     Start Time      |      End Time       |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | status kubelet --all --full                          |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cat kubelet --no-pager                               |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo journalctl                       | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | -xeu kubelet --all --full                            |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /etc/kubernetes/kubelet.conf                         |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /var/lib/kubelet/config.yaml                         |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | status docker --all --full                           |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cat docker --no-pager                                |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /etc/docker/daemon.json                              |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo docker                           | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | system info                                          |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | status cri-docker --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cat cri-docker --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | /etc/systemd/system/cri-docker.service.d/10-cni.conf |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /usr/lib/systemd/system/cri-docker.service           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo                                  | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cri-dockerd --version                                |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | status containerd --all --full                       |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cat containerd --no-pager                            |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /lib/systemd/system/containerd.service               |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo cat                              | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /etc/containerd/config.toml                          |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo containerd                       | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | config dump                                          |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | status crio --all --full                             |               |         |         |                     |                     |
	|         | --no-pager                                           |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo systemctl                        | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | cat crio --no-pager                                  |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo find                             | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | /etc/crio -type f -exec sh -c                        |               |         |         |                     |                     |
	|         | 'echo {}; cat {}' \;                                 |               |         |         |                     |                     |
	| ssh     | -p auto-058224 sudo crio                             | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	|         | config                                               |               |         |         |                     |                     |
	| delete  | -p auto-058224                                       | auto-058224   | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC | 14 Mar 24 19:46 UTC |
	| start   | -p bridge-058224 --memory=3072                       | bridge-058224 | jenkins | v1.32.0 | 14 Mar 24 19:46 UTC |                     |
	|         | --alsologtostderr --wait=true                        |               |         |         |                     |                     |
	|         | --wait-timeout=15m                                   |               |         |         |                     |                     |
	|         | --cni=bridge --driver=kvm2                           |               |         |         |                     |                     |
	|         | --container-runtime=crio                             |               |         |         |                     |                     |
	|---------|------------------------------------------------------|---------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:46:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:46:33.546364 1000751 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:46:33.546829 1000751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:46:33.546843 1000751 out.go:304] Setting ErrFile to fd 2...
	I0314 19:46:33.546851 1000751 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:46:33.547346 1000751 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:46:33.548711 1000751 out.go:298] Setting JSON to false
	I0314 19:46:33.550023 1000751 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":98946,"bootTime":1710346648,"procs":301,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:46:33.550098 1000751 start.go:139] virtualization: kvm guest
	I0314 19:46:33.552176 1000751 out.go:177] * [bridge-058224] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:46:33.553825 1000751 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:46:33.553783 1000751 notify.go:220] Checking for updates...
	I0314 19:46:33.555295 1000751 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:46:33.556765 1000751 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:46:33.558151 1000751 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:46:33.559453 1000751 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:46:33.560674 1000751 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:46:33.562242 1000751 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:46:33.562338 1000751 config.go:182] Loaded profile config "enable-default-cni-058224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:46:33.562429 1000751 config.go:182] Loaded profile config "flannel-058224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:46:33.562539 1000751 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:46:33.601060 1000751 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:46:33.602267 1000751 start.go:297] selected driver: kvm2
	I0314 19:46:33.602283 1000751 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:46:33.602294 1000751 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:46:33.603001 1000751 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:46:33.603068 1000751 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:46:33.623135 1000751 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:46:33.623205 1000751 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 19:46:33.623496 1000751 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:46:33.623565 1000751 cni.go:84] Creating CNI manager for "bridge"
	I0314 19:46:33.623574 1000751 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 19:46:33.623631 1000751 start.go:340] cluster config:
	{Name:bridge-058224 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:bridge-058224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:c
rio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock
: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:46:33.623742 1000751 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:46:33.625601 1000751 out.go:177] * Starting "bridge-058224" primary control-plane node in "bridge-058224" cluster
	I0314 19:46:33.627019 1000751 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:46:33.627062 1000751 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:46:33.627073 1000751 cache.go:56] Caching tarball of preloaded images
	I0314 19:46:33.627155 1000751 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:46:33.627166 1000751 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:46:33.627256 1000751 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/bridge-058224/config.json ...
	I0314 19:46:33.627271 1000751 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/bridge-058224/config.json: {Name:mk859270741b3a09cfe710e692d5f4ada0dcbf45 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:33.627442 1000751 start.go:360] acquireMachinesLock for bridge-058224: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:46:33.627480 1000751 start.go:364] duration metric: took 21.929µs to acquireMachinesLock for "bridge-058224"
	I0314 19:46:33.627497 1000751 start.go:93] Provisioning new machine with config: &{Name:bridge-058224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{Kuber
netesVersion:v1.28.4 ClusterName:bridge-058224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountP
ort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} &{Name: IP: Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:46:33.627569 1000751 start.go:125] createHost starting for "" (driver="kvm2")
	I0314 19:46:33.257402  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | Using libvirt version 6000000
	I0314 19:46:33.259887  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.260456  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.260491  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.260685  999128 main.go:141] libmachine: Docker is up and running!
	I0314 19:46:33.260707  999128 main.go:141] libmachine: Reticulating splines...
	I0314 19:46:33.260716  999128 client.go:171] duration metric: took 27.177649237s to LocalClient.Create
	I0314 19:46:33.260748  999128 start.go:167] duration metric: took 27.177728007s to libmachine.API.Create "enable-default-cni-058224"
	I0314 19:46:33.260761  999128 start.go:293] postStartSetup for "enable-default-cni-058224" (driver="kvm2")
	I0314 19:46:33.260775  999128 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:46:33.260800  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .DriverName
	I0314 19:46:33.261072  999128 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:46:33.261174  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHHostname
	I0314 19:46:33.263708  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.264119  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.264151  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.264269  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHPort
	I0314 19:46:33.264469  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHKeyPath
	I0314 19:46:33.264643  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHUsername
	I0314 19:46:33.264788  999128 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/enable-default-cni-058224/id_rsa Username:docker}
	I0314 19:46:33.350746  999128 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:46:33.356668  999128 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:46:33.356696  999128 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:46:33.356766  999128 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:46:33.356876  999128 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:46:33.357029  999128 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:46:33.367967  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:46:33.402167  999128 start.go:296] duration metric: took 141.389634ms for postStartSetup
	I0314 19:46:33.402230  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetConfigRaw
	I0314 19:46:33.402852  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetIP
	I0314 19:46:33.406240  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.406659  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.406692  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.406972  999128 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/config.json ...
	I0314 19:46:33.407203  999128 start.go:128] duration metric: took 27.348757296s to createHost
	I0314 19:46:33.407234  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHHostname
	I0314 19:46:33.409718  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.410211  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.410255  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.410383  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHPort
	I0314 19:46:33.410573  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHKeyPath
	I0314 19:46:33.410755  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHKeyPath
	I0314 19:46:33.410893  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHUsername
	I0314 19:46:33.411087  999128 main.go:141] libmachine: Using SSH client type: native
	I0314 19:46:33.411324  999128 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.199 22 <nil> <nil>}
	I0314 19:46:33.411341  999128 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:46:33.522826  999128 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710445593.502364588
	
	I0314 19:46:33.522851  999128 fix.go:216] guest clock: 1710445593.502364588
	I0314 19:46:33.522860  999128 fix.go:229] Guest: 2024-03-14 19:46:33.502364588 +0000 UTC Remote: 2024-03-14 19:46:33.407218168 +0000 UTC m=+35.333050353 (delta=95.14642ms)
	I0314 19:46:33.522905  999128 fix.go:200] guest clock delta is within tolerance: 95.14642ms
	I0314 19:46:33.522917  999128 start.go:83] releasing machines lock for "enable-default-cni-058224", held for 27.464626855s
	I0314 19:46:33.522943  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .DriverName
	I0314 19:46:33.523286  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetIP
	I0314 19:46:33.526547  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.527180  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.527207  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.527342  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .DriverName
	I0314 19:46:33.527808  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .DriverName
	I0314 19:46:33.527989  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .DriverName
	I0314 19:46:33.528062  999128 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:46:33.528103  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHHostname
	I0314 19:46:33.528229  999128 ssh_runner.go:195] Run: cat /version.json
	I0314 19:46:33.528281  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHHostname
	I0314 19:46:33.530780  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.531151  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.531177  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.531390  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHPort
	I0314 19:46:33.533775  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.534182  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:33.534205  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:33.534340  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHPort
	I0314 19:46:33.534411  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHKeyPath
	I0314 19:46:33.534452  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHKeyPath
	I0314 19:46:33.534486  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHUsername
	I0314 19:46:33.534526  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetSSHUsername
	I0314 19:46:33.534573  999128 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/enable-default-cni-058224/id_rsa Username:docker}
	I0314 19:46:33.534905  999128 sshutil.go:53] new ssh client: &{IP:192.168.72.199 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/enable-default-cni-058224/id_rsa Username:docker}
	I0314 19:46:33.639488  999128 ssh_runner.go:195] Run: systemctl --version
	I0314 19:46:33.648023  999128 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:46:33.834275  999128 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:46:33.842348  999128 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:46:33.842412  999128 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:46:33.860586  999128 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:46:33.860612  999128 start.go:494] detecting cgroup driver to use...
	I0314 19:46:33.860674  999128 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:46:33.883633  999128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:46:33.903573  999128 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:46:33.903630  999128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:46:33.918548  999128 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:46:33.937995  999128 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:46:34.068347  999128 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:46:34.237529  999128 docker.go:233] disabling docker service ...
	I0314 19:46:34.237634  999128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:46:34.255445  999128 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:46:34.271365  999128 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:46:34.428534  999128 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:46:34.583038  999128 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:46:34.600260  999128 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:46:34.621340  999128 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:46:34.621416  999128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:46:34.634870  999128 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:46:34.634957  999128 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:46:34.648899  999128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:46:34.664386  999128 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:46:34.677198  999128 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:46:34.690616  999128 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:46:34.702183  999128 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:46:34.702248  999128 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:46:34.717643  999128 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:46:34.728992  999128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:46:34.869167  999128 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:46:35.043329  999128 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:46:35.043418  999128 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:46:35.050312  999128 start.go:562] Will wait 60s for crictl version
	I0314 19:46:35.050361  999128 ssh_runner.go:195] Run: which crictl
	I0314 19:46:35.054861  999128 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:46:35.096594  999128 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:46:35.096688  999128 ssh_runner.go:195] Run: crio --version
	I0314 19:46:35.129839  999128 ssh_runner.go:195] Run: crio --version
	I0314 19:46:35.172958  999128 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:46:32.837265  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:33.337445  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:33.836894  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:34.336875  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:34.837355  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:35.337474  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:35.837421  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:36.337435  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:36.837441  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:37.337436  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:35.174410  999128 main.go:141] libmachine: (enable-default-cni-058224) Calling .GetIP
	I0314 19:46:35.177644  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:35.178170  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:e8:eb:92", ip: ""} in network mk-enable-default-cni-058224: {Iface:virbr4 ExpiryTime:2024-03-14 20:46:24 +0000 UTC Type:0 Mac:52:54:00:e8:eb:92 Iaid: IPaddr:192.168.72.199 Prefix:24 Hostname:enable-default-cni-058224 Clientid:01:52:54:00:e8:eb:92}
	I0314 19:46:35.178197  999128 main.go:141] libmachine: (enable-default-cni-058224) DBG | domain enable-default-cni-058224 has defined IP address 192.168.72.199 and MAC address 52:54:00:e8:eb:92 in network mk-enable-default-cni-058224
	I0314 19:46:35.178523  999128 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:46:35.183602  999128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:46:35.202289  999128 kubeadm.go:877] updating cluster {Name:enable-default-cni-058224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVe
rsion:v1.28.4 ClusterName:enable-default-cni-058224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 Mou
ntOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:46:35.202456  999128 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:46:35.202521  999128 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:46:35.242944  999128 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:46:35.243029  999128 ssh_runner.go:195] Run: which lz4
	I0314 19:46:35.247882  999128 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:46:35.253781  999128 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:46:35.253824  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:46:37.301789  999128 crio.go:444] duration metric: took 2.053932167s to copy over tarball
	I0314 19:46:37.301900  999128 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:46:33.629268 1000751 out.go:204] * Creating kvm2 VM (CPUs=2, Memory=3072MB, Disk=20000MB) ...
	I0314 19:46:33.629434 1000751 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:33.629470 1000751 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:33.645932 1000751 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33335
	I0314 19:46:33.646378 1000751 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:33.646998 1000751 main.go:141] libmachine: Using API Version  1
	I0314 19:46:33.647021 1000751 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:33.647352 1000751 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:33.647598 1000751 main.go:141] libmachine: (bridge-058224) Calling .GetMachineName
	I0314 19:46:33.647755 1000751 main.go:141] libmachine: (bridge-058224) Calling .DriverName
	I0314 19:46:33.647908 1000751 start.go:159] libmachine.API.Create for "bridge-058224" (driver="kvm2")
	I0314 19:46:33.647933 1000751 client.go:168] LocalClient.Create starting
	I0314 19:46:33.647967 1000751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem
	I0314 19:46:33.648015 1000751 main.go:141] libmachine: Decoding PEM data...
	I0314 19:46:33.648041 1000751 main.go:141] libmachine: Parsing certificate...
	I0314 19:46:33.648108 1000751 main.go:141] libmachine: Reading certificate data from /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem
	I0314 19:46:33.648136 1000751 main.go:141] libmachine: Decoding PEM data...
	I0314 19:46:33.648158 1000751 main.go:141] libmachine: Parsing certificate...
	I0314 19:46:33.648177 1000751 main.go:141] libmachine: Running pre-create checks...
	I0314 19:46:33.648191 1000751 main.go:141] libmachine: (bridge-058224) Calling .PreCreateCheck
	I0314 19:46:33.648673 1000751 main.go:141] libmachine: (bridge-058224) Calling .GetConfigRaw
	I0314 19:46:33.649107 1000751 main.go:141] libmachine: Creating machine...
	I0314 19:46:33.649126 1000751 main.go:141] libmachine: (bridge-058224) Calling .Create
	I0314 19:46:33.649275 1000751 main.go:141] libmachine: (bridge-058224) Creating KVM machine...
	I0314 19:46:33.650799 1000751 main.go:141] libmachine: (bridge-058224) DBG | found existing default KVM network
	I0314 19:46:33.652282 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:33.652106 1000775 network.go:206] using free private subnet 192.168.39.0/24: &{IP:192.168.39.0 Netmask:255.255.255.0 Prefix:24 CIDR:192.168.39.0/24 Gateway:192.168.39.1 ClientMin:192.168.39.2 ClientMax:192.168.39.254 Broadcast:192.168.39.255 IsPrivate:true Interface:{IfaceName: IfaceIPv4: IfaceMTU:0 IfaceMAC:} reservation:0xc00012dfc0}
	I0314 19:46:33.652310 1000751 main.go:141] libmachine: (bridge-058224) DBG | created network xml: 
	I0314 19:46:33.652324 1000751 main.go:141] libmachine: (bridge-058224) DBG | <network>
	I0314 19:46:33.652332 1000751 main.go:141] libmachine: (bridge-058224) DBG |   <name>mk-bridge-058224</name>
	I0314 19:46:33.652347 1000751 main.go:141] libmachine: (bridge-058224) DBG |   <dns enable='no'/>
	I0314 19:46:33.652354 1000751 main.go:141] libmachine: (bridge-058224) DBG |   
	I0314 19:46:33.652365 1000751 main.go:141] libmachine: (bridge-058224) DBG |   <ip address='192.168.39.1' netmask='255.255.255.0'>
	I0314 19:46:33.652375 1000751 main.go:141] libmachine: (bridge-058224) DBG |     <dhcp>
	I0314 19:46:33.652388 1000751 main.go:141] libmachine: (bridge-058224) DBG |       <range start='192.168.39.2' end='192.168.39.253'/>
	I0314 19:46:33.652395 1000751 main.go:141] libmachine: (bridge-058224) DBG |     </dhcp>
	I0314 19:46:33.652401 1000751 main.go:141] libmachine: (bridge-058224) DBG |   </ip>
	I0314 19:46:33.652409 1000751 main.go:141] libmachine: (bridge-058224) DBG |   
	I0314 19:46:33.652415 1000751 main.go:141] libmachine: (bridge-058224) DBG | </network>
	I0314 19:46:33.652422 1000751 main.go:141] libmachine: (bridge-058224) DBG | 
	I0314 19:46:33.657463 1000751 main.go:141] libmachine: (bridge-058224) DBG | trying to create private KVM network mk-bridge-058224 192.168.39.0/24...
	I0314 19:46:33.739689 1000751 main.go:141] libmachine: (bridge-058224) Setting up store path in /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224 ...
	I0314 19:46:33.739743 1000751 main.go:141] libmachine: (bridge-058224) Building disk image from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 19:46:33.739755 1000751 main.go:141] libmachine: (bridge-058224) DBG | private KVM network mk-bridge-058224 192.168.39.0/24 created
	I0314 19:46:33.739775 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:33.739621 1000775 common.go:145] Making disk image using store path: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:46:33.739895 1000751 main.go:141] libmachine: (bridge-058224) Downloading /home/jenkins/minikube-integration/18384-942544/.minikube/cache/boot2docker.iso from file:///home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso...
	I0314 19:46:34.010053 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:34.009919 1000775 common.go:152] Creating ssh key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224/id_rsa...
	I0314 19:46:34.294876 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:34.294754 1000775 common.go:158] Creating raw disk image: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224/bridge-058224.rawdisk...
	I0314 19:46:34.294916 1000751 main.go:141] libmachine: (bridge-058224) DBG | Writing magic tar header
	I0314 19:46:34.294931 1000751 main.go:141] libmachine: (bridge-058224) DBG | Writing SSH key tar header
	I0314 19:46:34.294943 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:34.294892 1000775 common.go:172] Fixing permissions on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224 ...
	I0314 19:46:34.295017 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224
	I0314 19:46:34.295048 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube/machines
	I0314 19:46:34.295076 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224 (perms=drwx------)
	I0314 19:46:34.295095 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube/machines (perms=drwxr-xr-x)
	I0314 19:46:34.295101 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544/.minikube (perms=drwxr-xr-x)
	I0314 19:46:34.295111 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins/minikube-integration/18384-942544 (perms=drwxrwxr-x)
	I0314 19:46:34.295133 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins/minikube-integration (perms=drwxrwxr-x)
	I0314 19:46:34.295149 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:46:34.295165 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins/minikube-integration/18384-942544
	I0314 19:46:34.295178 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins/minikube-integration
	I0314 19:46:34.295190 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home/jenkins
	I0314 19:46:34.295202 1000751 main.go:141] libmachine: (bridge-058224) DBG | Checking permissions on dir: /home
	I0314 19:46:34.295217 1000751 main.go:141] libmachine: (bridge-058224) Setting executable bit set on /home/jenkins (perms=drwxr-xr-x)
	I0314 19:46:34.295232 1000751 main.go:141] libmachine: (bridge-058224) Creating domain...
	I0314 19:46:34.295268 1000751 main.go:141] libmachine: (bridge-058224) DBG | Skipping /home - not owner
	I0314 19:46:34.296448 1000751 main.go:141] libmachine: (bridge-058224) define libvirt domain using xml: 
	I0314 19:46:34.296467 1000751 main.go:141] libmachine: (bridge-058224) <domain type='kvm'>
	I0314 19:46:34.296473 1000751 main.go:141] libmachine: (bridge-058224)   <name>bridge-058224</name>
	I0314 19:46:34.296478 1000751 main.go:141] libmachine: (bridge-058224)   <memory unit='MiB'>3072</memory>
	I0314 19:46:34.296486 1000751 main.go:141] libmachine: (bridge-058224)   <vcpu>2</vcpu>
	I0314 19:46:34.296497 1000751 main.go:141] libmachine: (bridge-058224)   <features>
	I0314 19:46:34.296529 1000751 main.go:141] libmachine: (bridge-058224)     <acpi/>
	I0314 19:46:34.296553 1000751 main.go:141] libmachine: (bridge-058224)     <apic/>
	I0314 19:46:34.296566 1000751 main.go:141] libmachine: (bridge-058224)     <pae/>
	I0314 19:46:34.296576 1000751 main.go:141] libmachine: (bridge-058224)     
	I0314 19:46:34.296587 1000751 main.go:141] libmachine: (bridge-058224)   </features>
	I0314 19:46:34.296599 1000751 main.go:141] libmachine: (bridge-058224)   <cpu mode='host-passthrough'>
	I0314 19:46:34.296608 1000751 main.go:141] libmachine: (bridge-058224)   
	I0314 19:46:34.296615 1000751 main.go:141] libmachine: (bridge-058224)   </cpu>
	I0314 19:46:34.296645 1000751 main.go:141] libmachine: (bridge-058224)   <os>
	I0314 19:46:34.296671 1000751 main.go:141] libmachine: (bridge-058224)     <type>hvm</type>
	I0314 19:46:34.296683 1000751 main.go:141] libmachine: (bridge-058224)     <boot dev='cdrom'/>
	I0314 19:46:34.296698 1000751 main.go:141] libmachine: (bridge-058224)     <boot dev='hd'/>
	I0314 19:46:34.296708 1000751 main.go:141] libmachine: (bridge-058224)     <bootmenu enable='no'/>
	I0314 19:46:34.296718 1000751 main.go:141] libmachine: (bridge-058224)   </os>
	I0314 19:46:34.296725 1000751 main.go:141] libmachine: (bridge-058224)   <devices>
	I0314 19:46:34.296736 1000751 main.go:141] libmachine: (bridge-058224)     <disk type='file' device='cdrom'>
	I0314 19:46:34.296755 1000751 main.go:141] libmachine: (bridge-058224)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224/boot2docker.iso'/>
	I0314 19:46:34.296784 1000751 main.go:141] libmachine: (bridge-058224)       <target dev='hdc' bus='scsi'/>
	I0314 19:46:34.296797 1000751 main.go:141] libmachine: (bridge-058224)       <readonly/>
	I0314 19:46:34.296805 1000751 main.go:141] libmachine: (bridge-058224)     </disk>
	I0314 19:46:34.296818 1000751 main.go:141] libmachine: (bridge-058224)     <disk type='file' device='disk'>
	I0314 19:46:34.296830 1000751 main.go:141] libmachine: (bridge-058224)       <driver name='qemu' type='raw' cache='default' io='threads' />
	I0314 19:46:34.296844 1000751 main.go:141] libmachine: (bridge-058224)       <source file='/home/jenkins/minikube-integration/18384-942544/.minikube/machines/bridge-058224/bridge-058224.rawdisk'/>
	I0314 19:46:34.296856 1000751 main.go:141] libmachine: (bridge-058224)       <target dev='hda' bus='virtio'/>
	I0314 19:46:34.296864 1000751 main.go:141] libmachine: (bridge-058224)     </disk>
	I0314 19:46:34.296873 1000751 main.go:141] libmachine: (bridge-058224)     <interface type='network'>
	I0314 19:46:34.296882 1000751 main.go:141] libmachine: (bridge-058224)       <source network='mk-bridge-058224'/>
	I0314 19:46:34.296892 1000751 main.go:141] libmachine: (bridge-058224)       <model type='virtio'/>
	I0314 19:46:34.296900 1000751 main.go:141] libmachine: (bridge-058224)     </interface>
	I0314 19:46:34.296910 1000751 main.go:141] libmachine: (bridge-058224)     <interface type='network'>
	I0314 19:46:34.296928 1000751 main.go:141] libmachine: (bridge-058224)       <source network='default'/>
	I0314 19:46:34.296944 1000751 main.go:141] libmachine: (bridge-058224)       <model type='virtio'/>
	I0314 19:46:34.296978 1000751 main.go:141] libmachine: (bridge-058224)     </interface>
	I0314 19:46:34.297001 1000751 main.go:141] libmachine: (bridge-058224)     <serial type='pty'>
	I0314 19:46:34.297014 1000751 main.go:141] libmachine: (bridge-058224)       <target port='0'/>
	I0314 19:46:34.297021 1000751 main.go:141] libmachine: (bridge-058224)     </serial>
	I0314 19:46:34.297031 1000751 main.go:141] libmachine: (bridge-058224)     <console type='pty'>
	I0314 19:46:34.297049 1000751 main.go:141] libmachine: (bridge-058224)       <target type='serial' port='0'/>
	I0314 19:46:34.297057 1000751 main.go:141] libmachine: (bridge-058224)     </console>
	I0314 19:46:34.297064 1000751 main.go:141] libmachine: (bridge-058224)     <rng model='virtio'>
	I0314 19:46:34.297075 1000751 main.go:141] libmachine: (bridge-058224)       <backend model='random'>/dev/random</backend>
	I0314 19:46:34.297082 1000751 main.go:141] libmachine: (bridge-058224)     </rng>
	I0314 19:46:34.297091 1000751 main.go:141] libmachine: (bridge-058224)     
	I0314 19:46:34.297095 1000751 main.go:141] libmachine: (bridge-058224)     
	I0314 19:46:34.297100 1000751 main.go:141] libmachine: (bridge-058224)   </devices>
	I0314 19:46:34.297105 1000751 main.go:141] libmachine: (bridge-058224) </domain>
	I0314 19:46:34.297113 1000751 main.go:141] libmachine: (bridge-058224) 
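Context for the block above: the kvm2 driver renders the domain XML and asks libvirtd to define the guest, then starts it in a separate step. A minimal, self-contained Go sketch of that define-then-create flow using the libvirt Go bindings is shown below; the import path, the local XML file name, and the error handling are assumptions for illustration, not minikube's actual code.

    package main

    import (
        "log"
        "os"

        libvirt "libvirt.org/go/libvirt" // assumed import path for the libvirt Go bindings
    )

    func main() {
        // Connect to the system libvirtd instance, as the kvm2 driver does (qemu:///system).
        conn, err := libvirt.NewConnect("qemu:///system")
        if err != nil {
            log.Fatalf("connect: %v", err)
        }
        defer conn.Close()

        // Read a domain definition such as the one logged above (hypothetical local file).
        xml, err := os.ReadFile("bridge-058224.xml")
        if err != nil {
            log.Fatalf("read xml: %v", err)
        }

        // Define the persistent domain from the XML, then boot it.
        dom, err := conn.DomainDefineXML(string(xml))
        if err != nil {
            log.Fatalf("define domain: %v", err)
        }
        defer dom.Free()

        if err := dom.Create(); err != nil {
            log.Fatalf("start domain: %v", err)
        }
        log.Println("domain defined and started")
    }

Defining and creating are distinct libvirt operations, which is why the log prints "define libvirt domain using xml" and only later "Creating domain...".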
	I0314 19:46:34.302482 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:5f:df:fc in network default
	I0314 19:46:34.303203 1000751 main.go:141] libmachine: (bridge-058224) Ensuring networks are active...
	I0314 19:46:34.303234 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:34.303897 1000751 main.go:141] libmachine: (bridge-058224) Ensuring network default is active
	I0314 19:46:34.304255 1000751 main.go:141] libmachine: (bridge-058224) Ensuring network mk-bridge-058224 is active
	I0314 19:46:34.304755 1000751 main.go:141] libmachine: (bridge-058224) Getting domain xml...
	I0314 19:46:34.305714 1000751 main.go:141] libmachine: (bridge-058224) Creating domain...
	I0314 19:46:35.732921 1000751 main.go:141] libmachine: (bridge-058224) Waiting to get IP...
	I0314 19:46:35.733960 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:35.734606 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:35.734639 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:35.734503 1000775 retry.go:31] will retry after 202.256456ms: waiting for machine to come up
	I0314 19:46:35.939313 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:35.939971 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:35.940001 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:35.939878 1000775 retry.go:31] will retry after 265.792323ms: waiting for machine to come up
	I0314 19:46:36.207704 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:36.208339 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:36.208367 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:36.208263 1000775 retry.go:31] will retry after 326.123248ms: waiting for machine to come up
	I0314 19:46:36.535776 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:36.536469 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:36.536500 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:36.536417 1000775 retry.go:31] will retry after 383.043387ms: waiting for machine to come up
	I0314 19:46:36.921185 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:36.921875 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:36.921901 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:36.921836 1000775 retry.go:31] will retry after 738.378392ms: waiting for machine to come up
	I0314 19:46:37.661929 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:37.662478 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:37.662506 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:37.662431 1000775 retry.go:31] will retry after 821.898331ms: waiting for machine to come up
	I0314 19:46:38.486501 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:38.487035 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:38.487095 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:38.486993 1000775 retry.go:31] will retry after 813.252624ms: waiting for machine to come up
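The retry.go lines above show the driver polling for the guest's DHCP lease, sleeping a little longer after each failed attempt. A rough Go sketch of such a wait-for-IP loop follows; the function names, backoff factor, and stand-in lease lookup are illustrative only, not minikube's retry package.

    package main

    import (
        "errors"
        "fmt"
        "time"
    )

    // waitForIP polls lookup until it returns an address or the timeout expires,
    // growing the pause between attempts much like the log lines above.
    func waitForIP(lookup func() (string, error), timeout time.Duration) (string, error) {
        deadline := time.Now().Add(timeout)
        delay := 200 * time.Millisecond
        for time.Now().Before(deadline) {
            if ip, err := lookup(); err == nil && ip != "" {
                return ip, nil
            }
            time.Sleep(delay)
            delay += delay / 2 // back off gradually between attempts
        }
        return "", fmt.Errorf("timed out after %s waiting for machine IP", timeout)
    }

    func main() {
        attempts := 0
        ip, err := waitForIP(func() (string, error) {
            attempts++
            if attempts < 4 {
                return "", errors.New("no DHCP lease yet") // stand-in for a libvirt lease query
            }
            return "192.168.61.10", nil
        }, time.Minute)
        fmt.Println(ip, err)
    }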
	I0314 19:46:37.837481  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:38.337615  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:38.837214  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:39.337137  998484 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:46:39.544483  998484 kubeadm.go:1106] duration metric: took 11.855379623s to wait for elevateKubeSystemPrivileges
	W0314 19:46:39.544530  998484 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:46:39.544540  998484 kubeadm.go:393] duration metric: took 24.781575132s to StartCluster
	I0314 19:46:39.544564  998484 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:39.544650  998484 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:46:39.546150  998484 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:39.546422  998484 start.go:234] Will wait 15m0s for node &{Name: IP:192.168.50.208 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:46:39.547920  998484 out.go:177] * Verifying Kubernetes components...
	I0314 19:46:39.546576  998484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml"
	I0314 19:46:39.546596  998484 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:false nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:46:39.546842  998484 config.go:182] Loaded profile config "flannel-058224": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:46:39.549138  998484 addons.go:69] Setting storage-provisioner=true in profile "flannel-058224"
	I0314 19:46:39.549173  998484 addons.go:234] Setting addon storage-provisioner=true in "flannel-058224"
	I0314 19:46:39.549210  998484 host.go:66] Checking if "flannel-058224" exists ...
	I0314 19:46:39.549218  998484 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:46:39.549273  998484 addons.go:69] Setting default-storageclass=true in profile "flannel-058224"
	I0314 19:46:39.549312  998484 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "flannel-058224"
	I0314 19:46:39.549655  998484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:39.549673  998484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:39.549698  998484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:39.549709  998484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:39.570474  998484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34929
	I0314 19:46:39.570849  998484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37669
	I0314 19:46:39.571061  998484 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:39.571464  998484 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:39.571697  998484 main.go:141] libmachine: Using API Version  1
	I0314 19:46:39.571715  998484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:39.572120  998484 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:39.572318  998484 main.go:141] libmachine: Using API Version  1
	I0314 19:46:39.572340  998484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:39.572364  998484 main.go:141] libmachine: (flannel-058224) Calling .GetState
	I0314 19:46:39.572682  998484 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:39.573279  998484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:39.573338  998484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:39.576249  998484 addons.go:234] Setting addon default-storageclass=true in "flannel-058224"
	I0314 19:46:39.576296  998484 host.go:66] Checking if "flannel-058224" exists ...
	I0314 19:46:39.576707  998484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:39.576746  998484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:39.594493  998484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33931
	I0314 19:46:39.595083  998484 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:39.595977  998484 main.go:141] libmachine: Using API Version  1
	I0314 19:46:39.596005  998484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:39.596449  998484 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:39.596660  998484 main.go:141] libmachine: (flannel-058224) Calling .GetState
	I0314 19:46:39.600003  998484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39523
	I0314 19:46:39.600766  998484 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:39.601377  998484 main.go:141] libmachine: Using API Version  1
	I0314 19:46:39.601399  998484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:39.601522  998484 main.go:141] libmachine: (flannel-058224) Calling .DriverName
	I0314 19:46:39.603537  998484 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:46:39.602000  998484 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:39.604259  998484 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:46:39.605350  998484 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:46:39.605356  998484 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:46:39.605368  998484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:46:39.605388  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHHostname
	I0314 19:46:39.609498  998484 main.go:141] libmachine: (flannel-058224) DBG | domain flannel-058224 has defined MAC address 52:54:00:9b:29:2e in network mk-flannel-058224
	I0314 19:46:39.609671  998484 main.go:141] libmachine: (flannel-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:29:2e", ip: ""} in network mk-flannel-058224: {Iface:virbr2 ExpiryTime:2024-03-14 20:45:54 +0000 UTC Type:0 Mac:52:54:00:9b:29:2e Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:flannel-058224 Clientid:01:52:54:00:9b:29:2e}
	I0314 19:46:39.609713  998484 main.go:141] libmachine: (flannel-058224) DBG | domain flannel-058224 has defined IP address 192.168.50.208 and MAC address 52:54:00:9b:29:2e in network mk-flannel-058224
	I0314 19:46:39.609865  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHPort
	I0314 19:46:39.610241  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHKeyPath
	I0314 19:46:39.610416  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHUsername
	I0314 19:46:39.610562  998484 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/flannel-058224/id_rsa Username:docker}
	I0314 19:46:39.626228  998484 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45397
	I0314 19:46:39.626884  998484 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:46:39.627505  998484 main.go:141] libmachine: Using API Version  1
	I0314 19:46:39.627534  998484 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:46:39.627935  998484 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:46:39.628138  998484 main.go:141] libmachine: (flannel-058224) Calling .GetState
	I0314 19:46:39.630198  998484 main.go:141] libmachine: (flannel-058224) Calling .DriverName
	I0314 19:46:39.630539  998484 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:46:39.630561  998484 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:46:39.630581  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHHostname
	I0314 19:46:39.634226  998484 main.go:141] libmachine: (flannel-058224) DBG | domain flannel-058224 has defined MAC address 52:54:00:9b:29:2e in network mk-flannel-058224
	I0314 19:46:39.634749  998484 main.go:141] libmachine: (flannel-058224) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:9b:29:2e", ip: ""} in network mk-flannel-058224: {Iface:virbr2 ExpiryTime:2024-03-14 20:45:54 +0000 UTC Type:0 Mac:52:54:00:9b:29:2e Iaid: IPaddr:192.168.50.208 Prefix:24 Hostname:flannel-058224 Clientid:01:52:54:00:9b:29:2e}
	I0314 19:46:39.634779  998484 main.go:141] libmachine: (flannel-058224) DBG | domain flannel-058224 has defined IP address 192.168.50.208 and MAC address 52:54:00:9b:29:2e in network mk-flannel-058224
	I0314 19:46:39.635103  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHPort
	I0314 19:46:39.635316  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHKeyPath
	I0314 19:46:39.635509  998484 main.go:141] libmachine: (flannel-058224) Calling .GetSSHUsername
	I0314 19:46:39.635667  998484 sshutil.go:53] new ssh client: &{IP:192.168.50.208 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/flannel-058224/id_rsa Username:docker}
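The sshutil.go entries show the addon manifests being delivered over an SSH client built from the machine's IP, port, key path, and username. A minimal sketch of constructing such a client with golang.org/x/crypto/ssh follows; the key path and host are placeholders, and host-key verification is skipped only because the target is a throwaway test VM.

    package main

    import (
        "fmt"
        "log"
        "os"

        "golang.org/x/crypto/ssh"
    )

    func main() {
        key, err := os.ReadFile("/home/user/.minikube/machines/flannel-058224/id_rsa") // placeholder path
        if err != nil {
            log.Fatal(err)
        }
        signer, err := ssh.ParsePrivateKey(key)
        if err != nil {
            log.Fatal(err)
        }

        cfg := &ssh.ClientConfig{
            User:            "docker",
            Auth:            []ssh.AuthMethod{ssh.PublicKeys(signer)},
            HostKeyCallback: ssh.InsecureIgnoreHostKey(), // acceptable for a disposable test VM, not for production
        }

        client, err := ssh.Dial("tcp", "192.168.50.208:22", cfg)
        if err != nil {
            log.Fatal(err)
        }
        defer client.Close()

        session, err := client.NewSession()
        if err != nil {
            log.Fatal(err)
        }
        defer session.Close()

        out, err := session.CombinedOutput("uname -a")
        if err != nil {
            log.Fatal(err)
        }
        fmt.Printf("%s", out)
    }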
	I0314 19:46:39.949290  998484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:46:39.968327  998484 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:46:40.029799  998484 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:46:40.030049  998484 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -"
	I0314 19:46:40.762548  999128 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.460607012s)
	I0314 19:46:40.762584  999128 crio.go:451] duration metric: took 3.46076043s to extract the tarball
	I0314 19:46:40.762594  999128 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:46:40.822256  999128 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:46:40.874658  999128 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:46:40.874690  999128 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:46:40.874701  999128 kubeadm.go:928] updating node { 192.168.72.199 8443 v1.28.4 crio true true} ...
	I0314 19:46:40.874871  999128 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=enable-default-cni-058224 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.72.199
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:enable-default-cni-058224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge}
	I0314 19:46:40.874971  999128 ssh_runner.go:195] Run: crio config
	I0314 19:46:40.931263  999128 cni.go:84] Creating CNI manager for "bridge"
	I0314 19:46:40.931296  999128 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:46:40.931325  999128 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.199 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:enable-default-cni-058224 NodeName:enable-default-cni-058224 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.199"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.199 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.
crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:46:40.931573  999128 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.199
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "enable-default-cni-058224"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.199
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.199"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:46:40.931660  999128 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:46:40.945100  999128 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:46:40.945166  999128 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:46:40.957860  999128 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (325 bytes)
	I0314 19:46:40.981684  999128 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:46:41.002266  999128 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
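"scp memory" in these ssh_runner lines means the runner streams an in-memory buffer straight to a remote path instead of copying a local file. A hedged Go sketch of the same idea using the system ssh client and sudo tee is below; the helper name, host string, and remote command are illustrative, not minikube's ssh_runner API.

    package main

    import (
        "bytes"
        "fmt"
        "log"
        "os/exec"
    )

    // copyMemory pipes in-memory bytes to a remote path over ssh.
    func copyMemory(host, remotePath string, data []byte) error {
        cmd := exec.Command("ssh", host, fmt.Sprintf("sudo tee %s >/dev/null", remotePath))
        cmd.Stdin = bytes.NewReader(data)
        return cmd.Run()
    }

    func main() {
        rendered := []byte("apiVersion: kubeadm.k8s.io/v1beta3\nkind: InitConfiguration\n") // stand-in for the generated config
        if err := copyMemory("docker@192.168.72.199", "/var/tmp/minikube/kubeadm.yaml.new", rendered); err != nil {
            log.Fatal(err)
        }
    }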
	I0314 19:46:41.022347  999128 ssh_runner.go:195] Run: grep 192.168.72.199	control-plane.minikube.internal$ /etc/hosts
	I0314 19:46:41.028405  999128 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.199	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:46:41.045307  999128 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:46:41.204541  999128 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:46:41.230999  999128 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224 for IP: 192.168.72.199
	I0314 19:46:41.231023  999128 certs.go:194] generating shared ca certs ...
	I0314 19:46:41.231044  999128 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.231238  999128 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:46:41.231312  999128 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:46:41.231327  999128 certs.go:256] generating profile certs ...
	I0314 19:46:41.231402  999128 certs.go:363] generating signed profile cert for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.key
	I0314 19:46:41.231422  999128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.crt with IP's: []
	I0314 19:46:41.370670  999128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.crt ...
	I0314 19:46:41.370705  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.crt: {Name:mk3fbc6d3f933c69e4b7a3e6713a8e94884f48d3 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.370881  999128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.key ...
	I0314 19:46:41.370899  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/client.key: {Name:mk25bd9a364da4af042702d47c872e67a1fe2e96 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.371003  999128 certs.go:363] generating signed profile cert for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key.43ab9d21
	I0314 19:46:41.371024  999128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt.43ab9d21 with IP's: [10.96.0.1 127.0.0.1 10.0.0.1 192.168.72.199]
	I0314 19:46:41.521360  999128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt.43ab9d21 ...
	I0314 19:46:41.521396  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt.43ab9d21: {Name:mk83aad304189194006a3e22fae50397f262837d Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.521566  999128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key.43ab9d21 ...
	I0314 19:46:41.521583  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key.43ab9d21: {Name:mke565be71904338f37e468a0a7bd7ff531570bf Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.521689  999128 certs.go:381] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt.43ab9d21 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt
	I0314 19:46:41.521803  999128 certs.go:385] copying /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key.43ab9d21 -> /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key
	I0314 19:46:41.521892  999128 certs.go:363] generating signed profile cert for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.key
	I0314 19:46:41.521915  999128 crypto.go:68] Generating cert /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.crt with IP's: []
	I0314 19:46:41.708031  999128 crypto.go:156] Writing cert to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.crt ...
	I0314 19:46:41.708065  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.crt: {Name:mkd1671b441803560aed174143097067a22717ff Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:46:41.708289  999128 crypto.go:164] Writing key to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.key ...
	I0314 19:46:41.708317  999128 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.key: {Name:mk08bdcd7aee8f44686ef6f71c2b84f916dc901b Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
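The certs.go entries above generate the per-profile certificates (client, apiserver, proxy-client) signed by the shared minikube CA, with the apiserver certificate carrying the service and node IPs as SANs. A compact Go sketch of signing such a certificate with crypto/x509 follows; the file names, subject, key format, and lifetime are assumptions, and error handling is abbreviated.

    package main

    import (
        "crypto/rand"
        "crypto/rsa"
        "crypto/x509"
        "crypto/x509/pkix"
        "encoding/pem"
        "log"
        "math/big"
        "net"
        "os"
        "time"
    )

    func main() {
        // Load an existing CA pair (placeholder paths).
        caCrtPEM, err := os.ReadFile("ca.crt")
        if err != nil {
            log.Fatal(err)
        }
        caKeyPEM, err := os.ReadFile("ca.key")
        if err != nil {
            log.Fatal(err)
        }
        caBlock, _ := pem.Decode(caCrtPEM)
        keyBlock, _ := pem.Decode(caKeyPEM)
        caCert, err := x509.ParseCertificate(caBlock.Bytes)
        if err != nil {
            log.Fatal(err)
        }
        caKey, err := x509.ParsePKCS1PrivateKey(keyBlock.Bytes) // assumes a PKCS#1 RSA CA key
        if err != nil {
            log.Fatal(err)
        }

        // New key plus a serving certificate listing IP SANs like those seen in the log.
        key, err := rsa.GenerateKey(rand.Reader, 2048)
        if err != nil {
            log.Fatal(err)
        }
        tmpl := &x509.Certificate{
            SerialNumber: big.NewInt(time.Now().UnixNano()),
            Subject:      pkix.Name{CommonName: "minikube"},
            NotBefore:    time.Now(),
            NotAfter:     time.Now().AddDate(3, 0, 0),
            KeyUsage:     x509.KeyUsageDigitalSignature | x509.KeyUsageKeyEncipherment,
            ExtKeyUsage:  []x509.ExtKeyUsage{x509.ExtKeyUsageServerAuth},
            IPAddresses:  []net.IP{net.ParseIP("10.96.0.1"), net.ParseIP("127.0.0.1"), net.ParseIP("192.168.72.199")},
        }
        der, err := x509.CreateCertificate(rand.Reader, tmpl, caCert, &key.PublicKey, caKey)
        if err != nil {
            log.Fatal(err)
        }
        _ = os.WriteFile("apiserver.crt", pem.EncodeToMemory(&pem.Block{Type: "CERTIFICATE", Bytes: der}), 0644)
        _ = os.WriteFile("apiserver.key", pem.EncodeToMemory(&pem.Block{Type: "RSA PRIVATE KEY", Bytes: x509.MarshalPKCS1PrivateKey(key)}), 0600)
    }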
	I0314 19:46:41.708501  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:46:41.708542  999128 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:46:41.708558  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:46:41.708577  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:46:41.708602  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:46:41.708623  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:46:41.708662  999128 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:46:41.709321  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:46:41.748580  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:46:41.785136  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:46:41.819474  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:46:41.849016  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1436 bytes)
	I0314 19:46:41.878745  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:46:41.936459  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:46:41.997093  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/enable-default-cni-058224/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:46:42.030928  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:46:42.060918  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:46:42.093534  999128 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:46:42.126718  999128 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:46:42.150575  999128 ssh_runner.go:195] Run: openssl version
	I0314 19:46:42.157556  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:46:42.171554  999128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:46:42.177136  999128 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:46:42.177188  999128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:46:42.185600  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:46:42.201297  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:46:42.213985  999128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:46:42.219801  999128 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:46:42.219867  999128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:46:42.226671  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:46:42.238728  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:46:42.252075  999128 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:46:42.257611  999128 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:46:42.257680  999128 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:46:42.264048  999128 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
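The openssl/ln pairs above install each CA certificate into the guest's trust store: "openssl x509 -hash -noout" prints the subject hash, and the certificate is then linked as /etc/ssl/certs/<hash>.0 so OpenSSL can locate it. A small local sketch of the same convention is below (run with sufficient privileges; the certificate path is a placeholder).

    package main

    import (
        "log"
        "os"
        "os/exec"
        "path/filepath"
        "strings"
    )

    func main() {
        certPath := "/usr/share/ca-certificates/951311.pem" // placeholder

        // openssl prints the subject hash used to name the trust-store symlink.
        out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
        if err != nil {
            log.Fatal(err)
        }
        hash := strings.TrimSpace(string(out))

        link := filepath.Join("/etc/ssl/certs", hash+".0")
        if err := os.Symlink(certPath, link); err != nil && !os.IsExist(err) {
            log.Fatal(err)
        }
        log.Printf("linked %s -> %s", link, certPath)
    }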
	I0314 19:46:42.276315  999128 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:46:42.281073  999128 certs.go:399] 'apiserver-kubelet-client' cert doesn't exist, likely first start: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/var/lib/minikube/certs/apiserver-kubelet-client.crt': No such file or directory
	I0314 19:46:42.281132  999128 kubeadm.go:391] StartCluster: {Name:enable-default-cni-058224 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:3072 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersi
on:v1.28.4 ClusterName:enable-default-cni-058224 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:bridge} Nodes:[{Name: IP:192.168.72.199 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:15m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountO
ptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:46:42.281237  999128 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:46:42.281289  999128 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:46:42.326868  999128 cri.go:89] found id: ""
	I0314 19:46:42.326937  999128 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	I0314 19:46:42.339128  999128 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:46:42.351700  999128 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:46:42.363550  999128 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:46:42.363572  999128 kubeadm.go:156] found existing configuration files:
	
	I0314 19:46:42.363636  999128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:46:42.377506  999128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:46:42.377579  999128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:46:42.391947  999128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:46:42.403116  999128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:46:42.403181  999128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:46:42.418454  999128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:46:42.432163  999128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:46:42.432242  999128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:46:42.447711  999128 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:46:42.460700  999128 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:46:42.460781  999128 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:46:42.471530  999128 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:46:42.529467  999128 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:46:42.529713  999128 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:46:42.686212  999128 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:46:42.686315  999128 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:46:42.686475  999128 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:46:42.984883  999128 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:46:43.002365  999128 out.go:204]   - Generating certificates and keys ...
	I0314 19:46:43.002479  999128 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:46:43.002612  999128 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:46:43.060182  998484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (3.091738465s)
	I0314 19:46:43.060269  998484 ssh_runner.go:235] Completed: sudo systemctl start kubelet: (3.03042733s)
	I0314 19:46:43.060277  998484 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (3.110944165s)
	I0314 19:46:43.060327  998484 ssh_runner.go:235] Completed: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig -n kube-system get configmap coredns -o yaml | sed -e '/^        forward . \/etc\/resolv.conf.*/i \        hosts {\n           192.168.50.1 host.minikube.internal\n           fallthrough\n        }' -e '/^        errors *$/i \        log' | sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig replace -f -": (3.030245767s)
	I0314 19:46:43.060335  998484 main.go:141] libmachine: Making call to close driver server
	I0314 19:46:43.060356  998484 main.go:141] libmachine: (flannel-058224) Calling .Close
	I0314 19:46:43.060352  998484 start.go:948] {"host.minikube.internal": 192.168.50.1} host record injected into CoreDNS's ConfigMap
	I0314 19:46:43.060338  998484 main.go:141] libmachine: Making call to close driver server
	I0314 19:46:43.060436  998484 main.go:141] libmachine: (flannel-058224) Calling .Close
	I0314 19:46:43.060734  998484 main.go:141] libmachine: (flannel-058224) DBG | Closing plugin on server side
	I0314 19:46:43.060901  998484 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:46:43.060930  998484 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:46:43.061079  998484 main.go:141] libmachine: Making call to close driver server
	I0314 19:46:43.061111  998484 main.go:141] libmachine: (flannel-058224) Calling .Close
	I0314 19:46:43.061584  998484 node_ready.go:35] waiting up to 15m0s for node "flannel-058224" to be "Ready" ...
	I0314 19:46:43.061964  998484 main.go:141] libmachine: (flannel-058224) DBG | Closing plugin on server side
	I0314 19:46:43.061970  998484 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:46:43.061989  998484 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:46:43.061988  998484 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:46:43.062002  998484 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:46:43.062011  998484 main.go:141] libmachine: Making call to close driver server
	I0314 19:46:43.062024  998484 main.go:141] libmachine: (flannel-058224) Calling .Close
	I0314 19:46:43.062334  998484 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:46:43.062348  998484 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:46:43.122630  998484 main.go:141] libmachine: Making call to close driver server
	I0314 19:46:43.122739  998484 main.go:141] libmachine: (flannel-058224) Calling .Close
	I0314 19:46:43.123139  998484 main.go:141] libmachine: (flannel-058224) DBG | Closing plugin on server side
	I0314 19:46:43.123177  998484 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:46:43.123210  998484 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:46:43.124758  998484 out.go:177] * Enabled addons: storage-provisioner, default-storageclass
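With the addons enabled, the flannel-058224 start moves on to readiness polling: node_ready.go waits up to 15m0s for the node's Ready condition. A hedged client-go sketch of that check follows; the kubeconfig path, node name, and poll interval are assumptions, not minikube's implementation.

    package main

    import (
        "context"
        "fmt"
        "log"
        "time"

        corev1 "k8s.io/api/core/v1"
        metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
        "k8s.io/client-go/kubernetes"
        "k8s.io/client-go/tools/clientcmd"
    )

    func main() {
        cfg, err := clientcmd.BuildConfigFromFlags("", "/home/user/.kube/config") // placeholder kubeconfig
        if err != nil {
            log.Fatal(err)
        }
        cs, err := kubernetes.NewForConfig(cfg)
        if err != nil {
            log.Fatal(err)
        }

        deadline := time.Now().Add(15 * time.Minute)
        for time.Now().Before(deadline) {
            node, err := cs.CoreV1().Nodes().Get(context.TODO(), "flannel-058224", metav1.GetOptions{})
            if err == nil {
                for _, c := range node.Status.Conditions {
                    if c.Type == corev1.NodeReady && c.Status == corev1.ConditionTrue {
                        fmt.Println("node is Ready")
                        return
                    }
                }
            }
            time.Sleep(5 * time.Second)
        }
        log.Fatal("timed out waiting for node to be Ready")
    }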
	I0314 19:46:39.301868 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:39.303497 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:39.303525 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:39.303444 1000775 retry.go:31] will retry after 1.42408357s: waiting for machine to come up
	I0314 19:46:40.730089 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:40.730605 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:40.730638 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:40.730499 1000775 retry.go:31] will retry after 1.26009621s: waiting for machine to come up
	I0314 19:46:41.992975 1000751 main.go:141] libmachine: (bridge-058224) DBG | domain bridge-058224 has defined MAC address 52:54:00:34:2e:36 in network mk-bridge-058224
	I0314 19:46:41.993601 1000751 main.go:141] libmachine: (bridge-058224) DBG | unable to find current IP address of domain bridge-058224 in network mk-bridge-058224
	I0314 19:46:41.993652 1000751 main.go:141] libmachine: (bridge-058224) DBG | I0314 19:46:41.993559 1000775 retry.go:31] will retry after 2.086054329s: waiting for machine to come up
	
	
	==> CRI-O <==
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.294340864Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445605294315470,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=548f781b-cec4-4b5c-95db-8a5bb77f8c05 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.294946517Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=a57fff3a-459b-443b-831d-2965026674c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.295070314Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=a57fff3a-459b-443b-831d-2965026674c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.295247212Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=a57fff3a-459b-443b-831d-2965026674c3 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.356701481Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=b08d2b84-fbbf-42d7-9e65-99192801f046 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.356814276Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=b08d2b84-fbbf-42d7-9e65-99192801f046 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.362178736Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=cf24e2be-8f93-42d2-bf16-479017af4e00 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.362578021Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445605362550640,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=cf24e2be-8f93-42d2-bf16-479017af4e00 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.363714438Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=4a7e1512-2cd9-446f-b35c-ab2fb5f3e489 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.363785297Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=4a7e1512-2cd9-446f-b35c-ab2fb5f3e489 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.364451151Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=4a7e1512-2cd9-446f-b35c-ab2fb5f3e489 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.420419239Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=0cdf021d-b0fd-489f-90a6-25db98cf2552 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.420521055Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=0cdf021d-b0fd-489f-90a6-25db98cf2552 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.422956118Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=7d40badb-f804-47ad-9463-96ef17ca35a9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.424199793Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445605424158019,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=7d40badb-f804-47ad-9463-96ef17ca35a9 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.425244646Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=71f0e26f-ecb9-4b65-b262-c2243f2186f8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.425358349Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=71f0e26f-ecb9-4b65-b262-c2243f2186f8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.425582258Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=71f0e26f-ecb9-4b65-b262-c2243f2186f8 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.481711143Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=63ff0fbf-950a-42a7-b0d1-86d00580ec1b name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.481815165Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=63ff0fbf-950a-42a7-b0d1-86d00580ec1b name=/runtime.v1.RuntimeService/Version
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.483497494Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=507ffba8-11ff-4ec0-859f-878a204f7536 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.484349077Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445605484314249,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:125153,},InodesUsed:&UInt64Value{Value:62,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=507ffba8-11ff-4ec0-859f-878a204f7536 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.485351741Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=0fa0686b-2740-4158-b02f-7f10c53f5a15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.485423267Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=0fa0686b-2740-4158-b02f-7f10c53f5a15 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:46:45 default-k8s-diff-port-440341 crio[692]: time="2024-03-14 19:46:45.485664789Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{&Container{Id:18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85,PodSandboxId:5829a9a3b1c6a010bd414489521059562eeba5a38ee841089fcdb7decb148529,Metadata:&ContainerMetadata{Name:storage-provisioner,Attempt:0,},Image:&ImageSpec{Image:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562,State:CONTAINER_RUNNING,CreatedAt:1710444636939418198,Labels:map[string]string{io.kubernetes.container.name: storage-provisioner,io.kubernetes.pod.name: storage-provisioner,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: daafd1bc-b1f1-4dab-b615-8364e22f984f,},Annotations:map[string]string{io.kubernetes.container.hash: a381b584,io.kubernetes.container.restartCount: 0,io.kube
rnetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60,PodSandboxId:6994be9d850c8643e713b276f02c18f574d0f385cfcec2b315c26fbb6151ba44,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444635152845570,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-qkhfs,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8,},Annotations:map[string]string{io.kubernetes.container.hash: b9955dbd,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"proto
col\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda,PodSandboxId:dc6b5b0e66cf5ff7159c8e267956d514251cf22ef63d2693914ea964eb304d0f,Metadata:&ContainerMetadata{Name:coredns,Attempt:0,},Image:&ImageSpec{Image:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc,State:CONTAINER_RUNNING,CreatedAt:1710444634938047525,Labels:map[string]string{io.kubernetes.container.name: coredns,io.kubernetes.pod.name: coredns-5dd5756b68-g4dzq,io.kubernetes.pod.namespace: kube-system,io.kubernetes.
pod.uid: 9e849b06-74f4-4d8e-95b1-16136db8faee,},Annotations:map[string]string{io.kubernetes.container.hash: cf61d8ac,io.kubernetes.container.ports: [{\"name\":\"dns\",\"containerPort\":53,\"protocol\":\"UDP\"},{\"name\":\"dns-tcp\",\"containerPort\":53,\"protocol\":\"TCP\"},{\"name\":\"metrics\",\"containerPort\":9153,\"protocol\":\"TCP\"}],io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1,PodSandboxId:c04522b95d25f10257363d784ea688705042ac8291ecf3eccf388f6430bdd837,Metadata:&ContainerMetadata{Name:kube-proxy,Attempt:0,},Image:&ImageSpec{Image:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e,State:CONTAINER_RUNNING
,CreatedAt:1710444634276474413,Labels:map[string]string{io.kubernetes.container.name: kube-proxy,io.kubernetes.pod.name: kube-proxy-h7hdc,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e2e6b4f3-8ba9-4f0a-8e04-b289699b1017,},Annotations:map[string]string{io.kubernetes.container.hash: 25512ac8,io.kubernetes.container.restartCount: 0,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9,PodSandboxId:a057ad8c936229a0acbb1353bead9a62d58af4e080344b06fae36741b8c2039c,Metadata:&ContainerMetadata{Name:etcd,Attempt:2,},Image:&ImageSpec{Image:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9,State:CONTAINER_RUNNING,CreatedAt:1710444615073202849,Labels:m
ap[string]string{io.kubernetes.container.name: etcd,io.kubernetes.pod.name: etcd-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: 29a6694ff3899537a529b8ebec96a741,},Annotations:map[string]string{io.kubernetes.container.hash: b12ca03,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f,PodSandboxId:2ddea20fe163fe0c210d9ffe265689057dc8930f69f9b430a2478468af1a5bef,Metadata:&ContainerMetadata{Name:kube-scheduler,Attempt:2,},Image:&ImageSpec{Image:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1,State:CONTAINER_RUNNING,CreatedAt:1710444615091408892,Labels:map[string]string{io.kub
ernetes.container.name: kube-scheduler,io.kubernetes.pod.name: kube-scheduler-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: e68f66e6b599f1d5cb92b8b9be82039e,},Annotations:map[string]string{io.kubernetes.container.hash: e1639c7a,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8,PodSandboxId:26576956c83ee2d66e0fe60a9d164102e533d5524d572422fb6d54ba395e4322,Metadata:&ContainerMetadata{Name:kube-apiserver,Attempt:2,},Image:&ImageSpec{Image:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257,State:CONTAINER_RUNNING,CreatedAt:1710444615009347147,Labels:map[string]string{io.kuber
netes.container.name: kube-apiserver,io.kubernetes.pod.name: kube-apiserver-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: c8a372661903be7fb35d52cbda0251c8,},Annotations:map[string]string{io.kubernetes.container.hash: 924a1e92,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},&Container{Id:978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295,PodSandboxId:1aace104961e9a73a09cb825adb5f5064b06f1e0b9cf34c303fccb0d3135441a,Metadata:&ContainerMetadata{Name:kube-controller-manager,Attempt:2,},Image:&ImageSpec{Image:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,Annotations:map[string]string{},UserSpecifiedImage:,RuntimeHandler:,},ImageRef:d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591,State:CONTAINER_RUNNING,CreatedAt:1710444615003763952,Labels:map[string]string{i
o.kubernetes.container.name: kube-controller-manager,io.kubernetes.pod.name: kube-controller-manager-default-k8s-diff-port-440341,io.kubernetes.pod.namespace: kube-system,io.kubernetes.pod.uid: d0131674def7e568083bac27c383f9e4,},Annotations:map[string]string{io.kubernetes.container.hash: 4b9c51fc,io.kubernetes.container.restartCount: 2,io.kubernetes.container.terminationMessagePath: /dev/termination-log,io.kubernetes.container.terminationMessagePolicy: File,io.kubernetes.pod.terminationGracePeriod: 30,},},},}" file="otel-collector/interceptors.go:74" id=0fa0686b-2740-4158-b02f-7f10c53f5a15 name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE                                                              CREATED             STATE               NAME                      ATTEMPT             POD ID              POD
	18d1cd32af6f4       6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562   16 minutes ago      Running             storage-provisioner       0                   5829a9a3b1c6a       storage-provisioner
	516979e575152       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   6994be9d850c8       coredns-5dd5756b68-qkhfs
	f2cf12f483037       ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc   16 minutes ago      Running             coredns                   0                   dc6b5b0e66cf5       coredns-5dd5756b68-g4dzq
	b90a8fc014011       83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e   16 minutes ago      Running             kube-proxy                0                   c04522b95d25f       kube-proxy-h7hdc
	093b075595071       e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1   16 minutes ago      Running             kube-scheduler            2                   2ddea20fe163f       kube-scheduler-default-k8s-diff-port-440341
	f2e22802c857e       73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9   16 minutes ago      Running             etcd                      2                   a057ad8c93622       etcd-default-k8s-diff-port-440341
	e071343bddeb2       7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257   16 minutes ago      Running             kube-apiserver            2                   26576956c83ee       kube-apiserver-default-k8s-diff-port-440341
	978f9b7e919fb       d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591   16 minutes ago      Running             kube-controller-manager   2                   1aace104961e9       kube-controller-manager-default-k8s-diff-port-440341
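For reference, the container status table above is the CRI view that minikube renders for this node. A roughly equivalent listing can usually be pulled straight from the guest with crictl (hedged example; it assumes the default-k8s-diff-port-440341 profile from this run still exists):

  out/minikube-linux-amd64 -p default-k8s-diff-port-440341 ssh "sudo crictl ps -a"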
	
	
	==> coredns [516979e575152806b7e992fa23fa202f248f9e96eed77d258382809942991f60] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> coredns [f2cf12f4830373a332e2eef6d488a87bed2e807f68caf517e80ef47633cc1cda] <==
	.:53
	[INFO] plugin/reload: Running configuration SHA512 = 591cf328cccc12bc490481273e738df59329c62c0b729d94e8b61db9961c2fa5f046dd37f1cf888b953814040d180f52594972691cd6ff41be96639138a43908
	CoreDNS-1.10.1
	linux/amd64, go1.20, 055b2c3
	
	
	==> describe nodes <==
	Name:               default-k8s-diff-port-440341
	Roles:              control-plane
	Labels:             beta.kubernetes.io/arch=amd64
	                    beta.kubernetes.io/os=linux
	                    kubernetes.io/arch=amd64
	                    kubernetes.io/hostname=default-k8s-diff-port-440341
	                    kubernetes.io/os=linux
	                    minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7
	                    minikube.k8s.io/name=default-k8s-diff-port-440341
	                    minikube.k8s.io/primary=true
	                    minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700
	                    minikube.k8s.io/version=v1.32.0
	                    node-role.kubernetes.io/control-plane=
	                    node.kubernetes.io/exclude-from-external-load-balancers=
	Annotations:        kubeadm.alpha.kubernetes.io/cri-socket: unix:///var/run/crio/crio.sock
	                    node.alpha.kubernetes.io/ttl: 0
	                    volumes.kubernetes.io/controller-managed-attach-detach: true
	CreationTimestamp:  Thu, 14 Mar 2024 19:30:18 +0000
	Taints:             <none>
	Unschedulable:      false
	Lease:
	  HolderIdentity:  default-k8s-diff-port-440341
	  AcquireTime:     <unset>
	  RenewTime:       Thu, 14 Mar 2024 19:46:43 +0000
	Conditions:
	  Type             Status  LastHeartbeatTime                 LastTransitionTime                Reason                       Message
	  ----             ------  -----------------                 ------------------                ------                       -------
	  MemoryPressure   False   Thu, 14 Mar 2024 19:46:00 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasSufficientMemory   kubelet has sufficient memory available
	  DiskPressure     False   Thu, 14 Mar 2024 19:46:00 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasNoDiskPressure     kubelet has no disk pressure
	  PIDPressure      False   Thu, 14 Mar 2024 19:46:00 +0000   Thu, 14 Mar 2024 19:30:15 +0000   KubeletHasSufficientPID      kubelet has sufficient PID available
	  Ready            True    Thu, 14 Mar 2024 19:46:00 +0000   Thu, 14 Mar 2024 19:30:18 +0000   KubeletReady                 kubelet is posting ready status
	Addresses:
	  InternalIP:  192.168.61.88
	  Hostname:    default-k8s-diff-port-440341
	Capacity:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	Allocatable:
	  cpu:                2
	  ephemeral-storage:  17734596Ki
	  hugepages-2Mi:      0
	  memory:             2164188Ki
	  pods:               110
	System Info:
	  Machine ID:                 a1b94d723bbc44fdae20abac75fa217c
	  System UUID:                a1b94d72-3bbc-44fd-ae20-abac75fa217c
	  Boot ID:                    b763291f-3f1d-4c8f-a3df-481acb31857c
	  Kernel Version:             5.10.207
	  OS Image:                   Buildroot 2023.02.9
	  Operating System:           linux
	  Architecture:               amd64
	  Container Runtime Version:  cri-o://1.29.1
	  Kubelet Version:            v1.28.4
	  Kube-Proxy Version:         v1.28.4
	PodCIDR:                      10.244.0.0/24
	PodCIDRs:                     10.244.0.0/24
	Non-terminated Pods:          (9 in total)
	  Namespace                   Name                                                    CPU Requests  CPU Limits  Memory Requests  Memory Limits  Age
	  ---------                   ----                                                    ------------  ----------  ---------------  -------------  ---
	  kube-system                 coredns-5dd5756b68-g4dzq                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 coredns-5dd5756b68-qkhfs                                100m (5%)     0 (0%)      70Mi (3%)        170Mi (8%)     16m
	  kube-system                 etcd-default-k8s-diff-port-440341                       100m (5%)     0 (0%)      100Mi (4%)       0 (0%)         16m
	  kube-system                 kube-apiserver-default-k8s-diff-port-440341             250m (12%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-controller-manager-default-k8s-diff-port-440341    200m (10%)    0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-proxy-h7hdc                                        0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 kube-scheduler-default-k8s-diff-port-440341             100m (5%)     0 (0%)      0 (0%)           0 (0%)         16m
	  kube-system                 metrics-server-57f55c9bc5-p7s4d                         100m (5%)     0 (0%)      200Mi (9%)       0 (0%)         16m
	  kube-system                 storage-provisioner                                     0 (0%)        0 (0%)      0 (0%)           0 (0%)         16m
	Allocated resources:
	  (Total limits may be over 100 percent, i.e., overcommitted.)
	  Resource           Requests     Limits
	  --------           --------     ------
	  cpu                950m (47%)   0 (0%)
	  memory             440Mi (20%)  340Mi (16%)
	  ephemeral-storage  0 (0%)       0 (0%)
	  hugepages-2Mi      0 (0%)       0 (0%)
	Events:
	  Type    Reason                   Age                From             Message
	  ----    ------                   ----               ----             -------
	  Normal  Starting                 16m                kube-proxy       
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m (x8 over 16m)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m (x7 over 16m)  kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  Starting                 16m                kubelet          Starting kubelet.
	  Normal  NodeHasSufficientMemory  16m                kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientMemory
	  Normal  NodeHasNoDiskPressure    16m                kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasNoDiskPressure
	  Normal  NodeHasSufficientPID     16m                kubelet          Node default-k8s-diff-port-440341 status is now: NodeHasSufficientPID
	  Normal  NodeAllocatableEnforced  16m                kubelet          Updated Node Allocatable limit across pods
	  Normal  RegisteredNode           16m                node-controller  Node default-k8s-diff-port-440341 event: Registered Node default-k8s-diff-port-440341 in Controller
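The node description above mirrors what kubectl reports for the control-plane node; a sketch of how to fetch it again, assuming the kubeconfig context created for this profile is still available:

  kubectl --context default-k8s-diff-port-440341 describe node default-k8s-diff-port-440341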
	
	
	==> dmesg <==
	[  +0.046737] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[Mar14 19:25] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.580134] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +2.524848] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000006] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000001] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.010175] systemd-fstab-generator[615]: Ignoring "noauto" option for root device
	[  +0.058083] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.069272] systemd-fstab-generator[627]: Ignoring "noauto" option for root device
	[  +0.175764] systemd-fstab-generator[641]: Ignoring "noauto" option for root device
	[  +0.177169] systemd-fstab-generator[653]: Ignoring "noauto" option for root device
	[  +0.323997] systemd-fstab-generator[678]: Ignoring "noauto" option for root device
	[  +5.901003] systemd-fstab-generator[778]: Ignoring "noauto" option for root device
	[  +0.058042] kauditd_printk_skb: 130 callbacks suppressed
	[  +1.894055] systemd-fstab-generator[901]: Ignoring "noauto" option for root device
	[  +5.697405] kauditd_printk_skb: 97 callbacks suppressed
	[ +11.992905] kauditd_printk_skb: 77 callbacks suppressed
	[Mar14 19:30] kauditd_printk_skb: 5 callbacks suppressed
	[  +1.917254] systemd-fstab-generator[3421]: Ignoring "noauto" option for root device
	[  +4.822406] kauditd_printk_skb: 55 callbacks suppressed
	[  +2.973079] systemd-fstab-generator[3748]: Ignoring "noauto" option for root device
	[ +12.558475] systemd-fstab-generator[3947]: Ignoring "noauto" option for root device
	[  +0.106553] kauditd_printk_skb: 14 callbacks suppressed
	[Mar14 19:31] kauditd_printk_skb: 80 callbacks suppressed
	[Mar14 19:46] hrtimer: interrupt took 5086400 ns
	
	
	==> etcd [f2e22802c857e595313e8ffed12f8ea74ae79654da84f7c59fc68dbd38c35da9] <==
	{"level":"info","ts":"2024-03-14T19:30:16.128119Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became pre-candidate at term 1"}
	{"level":"info","ts":"2024-03-14T19:30:16.128142Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c received MsgPreVoteResp from a40e8c9f94d8225c at term 1"}
	{"level":"info","ts":"2024-03-14T19:30:16.128156Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became candidate at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.128162Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c received MsgVoteResp from a40e8c9f94d8225c at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.12817Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"a40e8c9f94d8225c became leader at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.128177Z","logger":"raft","caller":"etcdserver/zap_raft.go:77","msg":"raft.node: a40e8c9f94d8225c elected leader a40e8c9f94d8225c at term 2"}
	{"level":"info","ts":"2024-03-14T19:30:16.132129Z","caller":"etcdserver/server.go:2571","msg":"setting up initial cluster version using v2 API","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.136882Z","caller":"etcdserver/server.go:2062","msg":"published local member to cluster through raft","local-member-id":"a40e8c9f94d8225c","local-member-attributes":"{Name:default-k8s-diff-port-440341 ClientURLs:[https://192.168.61.88:2379]}","request-path":"/0/members/a40e8c9f94d8225c/attributes","cluster-id":"d6b7474e07060719","publish-timeout":"7s"}
	{"level":"info","ts":"2024-03-14T19:30:16.136946Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:30:16.140218Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"192.168.61.88:2379"}
	{"level":"info","ts":"2024-03-14T19:30:16.140286Z","caller":"embed/serve.go:103","msg":"ready to serve client requests"}
	{"level":"info","ts":"2024-03-14T19:30:16.141084Z","caller":"embed/serve.go:250","msg":"serving client traffic securely","traffic":"grpc+http","address":"127.0.0.1:2379"}
	{"level":"info","ts":"2024-03-14T19:30:16.141449Z","caller":"membership/cluster.go:584","msg":"set initial cluster version","cluster-id":"d6b7474e07060719","local-member-id":"a40e8c9f94d8225c","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.141808Z","caller":"api/capability.go:75","msg":"enabled capabilities for version","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.141905Z","caller":"etcdserver/server.go:2595","msg":"cluster version is updated","cluster-version":"3.5"}
	{"level":"info","ts":"2024-03-14T19:30:16.187223Z","caller":"etcdmain/main.go:44","msg":"notifying init daemon"}
	{"level":"info","ts":"2024-03-14T19:30:16.187284Z","caller":"etcdmain/main.go:50","msg":"successfully notified init daemon"}
	{"level":"info","ts":"2024-03-14T19:40:16.229027Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":704}
	{"level":"info","ts":"2024-03-14T19:40:16.231345Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":704,"took":"1.900846ms","hash":1773800546}
	{"level":"info","ts":"2024-03-14T19:40:16.231461Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":1773800546,"revision":704,"compact-revision":-1}
	{"level":"info","ts":"2024-03-14T19:44:31.940403Z","caller":"traceutil/trace.go:171","msg":"trace[103126965] transaction","detail":"{read_only:false; response_revision:1156; number_of_response:1; }","duration":"132.014787ms","start":"2024-03-14T19:44:31.808338Z","end":"2024-03-14T19:44:31.940352Z","steps":["trace[103126965] 'process raft request'  (duration: 131.828267ms)"],"step_count":1}
	{"level":"info","ts":"2024-03-14T19:45:16.237626Z","caller":"mvcc/index.go:214","msg":"compact tree index","revision":947}
	{"level":"info","ts":"2024-03-14T19:45:16.23958Z","caller":"mvcc/kvstore_compaction.go:66","msg":"finished scheduled compaction","compact-revision":947,"took":"1.386981ms","hash":2505201216}
	{"level":"info","ts":"2024-03-14T19:45:16.239877Z","caller":"mvcc/hash.go:137","msg":"storing new hash","hash":2505201216,"revision":947,"compact-revision":704}
	{"level":"info","ts":"2024-03-14T19:45:42.628476Z","caller":"traceutil/trace.go:171","msg":"trace[454329670] transaction","detail":"{read_only:false; response_revision:1213; number_of_response:1; }","duration":"206.130022ms","start":"2024-03-14T19:45:42.422319Z","end":"2024-03-14T19:45:42.628449Z","steps":["trace[454329670] 'process raft request'  (duration: 205.984082ms)"],"step_count":1}
	
	
	==> kernel <==
	 19:46:46 up 21 min,  0 users,  load average: 0.24, 0.15, 0.16
	Linux default-k8s-diff-port-440341 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kube-apiserver [e071343bddeb260f6aae138da323d35bb8919bad790bc11ab42c79b999b5c8c8] <==
	W0314 19:43:19.156180       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:43:19.156287       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:43:19.156320       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:44:18.017563       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	I0314 19:45:18.017218       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:45:18.160331       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:45:18.160495       1 controller.go:146] Error updating APIService "v1beta1.metrics.k8s.io" with err: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:45:18.160825       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:45:19.161059       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:45:19.161200       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:45:19.161245       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:45:19.161096       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:45:19.161395       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:45:19.162289       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	I0314 19:46:18.017742       1 handler.go:232] Adding GroupVersion metrics.k8s.io v1beta1 to ResourceManager
	W0314 19:46:19.161835       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:46:19.162108       1 controller.go:113] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: Error, could not get list of group versions for APIService
	I0314 19:46:19.162253       1 controller.go:126] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
	W0314 19:46:19.162480       1 handler_proxy.go:93] no RequestInfo found in the context
	E0314 19:46:19.162535       1 controller.go:102] loading OpenAPI spec for "v1beta1.metrics.k8s.io" failed with: failed to download v1beta1.metrics.k8s.io: failed to retrieve openAPI spec, http error: ResponseCode: 503, Body: service unavailable
	, Header: map[Content-Type:[text/plain; charset=utf-8] X-Content-Type-Options:[nosniff]]
	I0314 19:46:19.164247       1 controller.go:109] OpenAPI AggregationController: action for item v1beta1.metrics.k8s.io: Rate Limited Requeue.
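The repeated 503 responses for v1beta1.metrics.k8s.io suggest the aggregated APIService backed by metrics-server was not reachable during this window. One way to confirm would be to inspect the APIService status (hedged example, not part of the captured run):

  kubectl --context default-k8s-diff-port-440341 get apiservice v1beta1.metrics.k8s.io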
	
	
	==> kube-controller-manager [978f9b7e919fb85a862ca88deab99fb2dc10c7e28a3b187f958108dc1e972295] <==
	I0314 19:41:03.749762       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:41:33.230220       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:41:33.760233       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:41:49.759916       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="307.558µs"
	E0314 19:42:03.236433       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:03.769072       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	I0314 19:42:04.757896       1 replica_set.go:676] "Finished syncing" kind="ReplicaSet" key="kube-system/metrics-server-57f55c9bc5" duration="135.406µs"
	E0314 19:42:33.241921       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:42:33.778406       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:03.248787       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:03.787712       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:43:33.255237       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:43:33.797226       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:44:03.261095       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:44:03.808683       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:44:33.268463       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:44:33.818287       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:45:03.273767       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:45:03.828748       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:45:33.280616       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:45:33.838951       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:46:03.286234       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:46:03.848639       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	E0314 19:46:33.292169       1 resource_quota_controller.go:440] unable to retrieve the complete list of server APIs: metrics.k8s.io/v1beta1: stale GroupVersion discovery: metrics.k8s.io/v1beta1
	I0314 19:46:33.860765       1 garbagecollector.go:816] "failed to discover some groups" groups="<internal error: json: unsupported type: map[schema.GroupVersion]error>"
	
	
	==> kube-proxy [b90a8fc0140117e11736c5e998cf4ecf4fd65d03db8f29c724e16adab62164d1] <==
	I0314 19:30:35.435124       1 server_others.go:69] "Using iptables proxy"
	I0314 19:30:35.479155       1 node.go:141] Successfully retrieved node IP: 192.168.61.88
	I0314 19:30:35.712378       1 server_others.go:121] "No iptables support for family" ipFamily="IPv6"
	I0314 19:30:35.712431       1 server.go:634] "kube-proxy running in single-stack mode" ipFamily="IPv4"
	I0314 19:30:35.715701       1 server_others.go:152] "Using iptables Proxier"
	I0314 19:30:35.741086       1 proxier.go:251] "Setting route_localnet=1 to allow node-ports on localhost; to change this either disable iptables.localhostNodePorts (--iptables-localhost-nodeports) or set nodePortAddresses (--nodeport-addresses) to filter loopback addresses"
	I0314 19:30:35.741384       1 server.go:846] "Version info" version="v1.28.4"
	I0314 19:30:35.741397       1 server.go:848] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
	I0314 19:30:35.773192       1 config.go:188] "Starting service config controller"
	I0314 19:30:35.774265       1 shared_informer.go:311] Waiting for caches to sync for service config
	I0314 19:30:35.774342       1 config.go:315] "Starting node config controller"
	I0314 19:30:35.774350       1 shared_informer.go:311] Waiting for caches to sync for node config
	I0314 19:30:35.774875       1 config.go:97] "Starting endpoint slice config controller"
	I0314 19:30:35.774911       1 shared_informer.go:311] Waiting for caches to sync for endpoint slice config
	I0314 19:30:35.875108       1 shared_informer.go:318] Caches are synced for endpoint slice config
	I0314 19:30:35.875193       1 shared_informer.go:318] Caches are synced for service config
	I0314 19:30:35.875458       1 shared_informer.go:318] Caches are synced for node config
	
	
	==> kube-scheduler [093b07559507142186ee81654ae4cf97ce86f061922b12368c9877cbf96b771f] <==
	W0314 19:30:18.282536       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	E0314 19:30:18.284669       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Service: failed to list *v1.Service: services is forbidden: User "system:kube-scheduler" cannot list resource "services" in API group "" at the cluster scope
	W0314 19:30:18.283663       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	E0314 19:30:18.284686       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.Node: failed to list *v1.Node: nodes is forbidden: User "system:kube-scheduler" cannot list resource "nodes" in API group "" at the cluster scope
	W0314 19:30:18.283733       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:18.284702       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIStorageCapacity: failed to list *v1.CSIStorageCapacity: csistoragecapacities.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csistoragecapacities" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:18.284227       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:30:18.285213       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:30:19.116850       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	E0314 19:30:19.117234       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PodDisruptionBudget: failed to list *v1.PodDisruptionBudget: poddisruptionbudgets.policy is forbidden: User "system:kube-scheduler" cannot list resource "poddisruptionbudgets" in API group "policy" at the cluster scope
	W0314 19:30:19.119726       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	E0314 19:30:19.119934       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.PersistentVolume: failed to list *v1.PersistentVolume: persistentvolumes is forbidden: User "system:kube-scheduler" cannot list resource "persistentvolumes" in API group "" at the cluster scope
	W0314 19:30:19.163104       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:19.163219       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:19.196882       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	E0314 19:30:19.197510       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.CSINode: failed to list *v1.CSINode: csinodes.storage.k8s.io is forbidden: User "system:kube-scheduler" cannot list resource "csinodes" in API group "storage.k8s.io" at the cluster scope
	W0314 19:30:19.280098       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	E0314 19:30:19.280150       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.StatefulSet: failed to list *v1.StatefulSet: statefulsets.apps is forbidden: User "system:kube-scheduler" cannot list resource "statefulsets" in API group "apps" at the cluster scope
	W0314 19:30:19.284653       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	E0314 19:30:19.284713       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicaSet: failed to list *v1.ReplicaSet: replicasets.apps is forbidden: User "system:kube-scheduler" cannot list resource "replicasets" in API group "apps" at the cluster scope
	W0314 19:30:19.291339       1 reflector.go:535] vendor/k8s.io/client-go/informers/factory.go:150: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	E0314 19:30:19.291388       1 reflector.go:147] vendor/k8s.io/client-go/informers/factory.go:150: Failed to watch *v1.ReplicationController: failed to list *v1.ReplicationController: replicationcontrollers is forbidden: User "system:kube-scheduler" cannot list resource "replicationcontrollers" in API group "" at the cluster scope
	W0314 19:30:19.632261       1 reflector.go:535] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	E0314 19:30:19.632313       1 reflector.go:147] pkg/server/dynamiccertificates/configmap_cafile_content.go:206: Failed to watch *v1.ConfigMap: failed to list *v1.ConfigMap: configmaps "extension-apiserver-authentication" is forbidden: User "system:kube-scheduler" cannot list resource "configmaps" in API group "" in the namespace "kube-system"
	I0314 19:30:21.955251       1 shared_informer.go:318] Caches are synced for client-ca::kube-system::extension-apiserver-authentication::client-ca-file
	
	
	==> kubelet <==
	Mar 14 19:44:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:44:29 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:44:29.737874    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:44:40 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:44:40.736921    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:44:52 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:44:52.739647    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:45:03 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:03.738257    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:45:17 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:17.737357    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:45:21 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:21.765169    3755 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:45:21 default-k8s-diff-port-440341 kubelet[3755]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:45:21 default-k8s-diff-port-440341 kubelet[3755]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:45:21 default-k8s-diff-port-440341 kubelet[3755]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:45:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:45:31 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:31.739029    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:45:45 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:45.738119    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:45:58 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:45:58.738582    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:46:12 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:12.737713    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:46:21 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:21.761427    3755 iptables.go:575] "Could not set up iptables canary" err=<
	Mar 14 19:46:21 default-k8s-diff-port-440341 kubelet[3755]:         error creating chain "KUBE-KUBELET-CANARY": exit status 3: Ignoring deprecated --wait-interval option.
	Mar 14 19:46:21 default-k8s-diff-port-440341 kubelet[3755]:         ip6tables v1.8.9 (legacy): can't initialize ip6tables table `nat': Table does not exist (do you need to insmod?)
	Mar 14 19:46:21 default-k8s-diff-port-440341 kubelet[3755]:         Perhaps ip6tables or your kernel needs to be upgraded.
	Mar 14 19:46:21 default-k8s-diff-port-440341 kubelet[3755]:  > table="nat" chain="KUBE-KUBELET-CANARY"
	Mar 14 19:46:27 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:27.740688    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ImagePullBackOff: \"Back-off pulling image \\\"fake.domain/registry.k8s.io/echoserver:1.4\\\"\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	Mar 14 19:46:40 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:40.756378    3755 remote_image.go:180] "PullImage from image service failed" err="rpc error: code = Unknown desc = pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:46:40 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:40.756477    3755 kuberuntime_image.go:53] "Failed to pull image" err="pinging container registry fake.domain: Get \"https://fake.domain/v2/\": dial tcp: lookup fake.domain: no such host" image="fake.domain/registry.k8s.io/echoserver:1.4"
	Mar 14 19:46:40 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:40.756853    3755 kuberuntime_manager.go:1261] container &Container{Name:metrics-server,Image:fake.domain/registry.k8s.io/echoserver:1.4,Command:[],Args:[--cert-dir=/tmp --secure-port=4443 --kubelet-preferred-address-types=InternalIP,ExternalIP,Hostname --kubelet-use-node-status-port --metric-resolution=60s --kubelet-insecure-tls],WorkingDir:,Ports:[]ContainerPort{ContainerPort{Name:https,HostPort:0,ContainerPort:4443,Protocol:TCP,HostIP:,},},Env:[]EnvVar{},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{cpu: {{100 -3} {<nil>} 100m DecimalSI},memory: {{209715200 0} {<nil>}  BinarySI},},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tmp-dir,ReadOnly:false,MountPath:/tmp,SubPath:,MountPropagation:nil,SubPathExpr:,},VolumeMount{Name:kube-api-access-8mmtg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/livez,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{1 0 https},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:1,PeriodSeconds:10,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:nil,Privileged:nil,SELinuxOptions:nil,RunAsUser:*1000,RunAsNonRoot:*true,ReadOnlyRootFilesystem:*true,AllowPrivilegeEscalation:nil,RunAsGroup:nil,ProcMount:nil,WindowsOptions:nil,SeccompProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod metrics-server-57f55c9bc5-p7s4d_kube-system(1b13ae7e-62a0-429c-bf4f-0f38b222db7e): ErrImagePull: pinging container registry fake.domain: Get "https://fake.domain/v2/": dial tcp: lookup fake.domain: no such host
	Mar 14 19:46:40 default-k8s-diff-port-440341 kubelet[3755]: E0314 19:46:40.757087    3755 pod_workers.go:1300] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"metrics-server\" with ErrImagePull: \"pinging container registry fake.domain: Get \\\"https://fake.domain/v2/\\\": dial tcp: lookup fake.domain: no such host\"" pod="kube-system/metrics-server-57f55c9bc5-p7s4d" podUID="1b13ae7e-62a0-429c-bf4f-0f38b222db7e"
	
	
	==> storage-provisioner [18d1cd32af6f4480fa9acae30922cb1716bd501d4d4b630436279aaf2ff32e85] <==
	I0314 19:30:37.091465       1 storage_provisioner.go:116] Initializing the minikube storage provisioner...
	I0314 19:30:37.111088       1 storage_provisioner.go:141] Storage provisioner initialized, now starting service!
	I0314 19:30:37.111165       1 leaderelection.go:243] attempting to acquire leader lease kube-system/k8s.io-minikube-hostpath...
	I0314 19:30:37.121950       1 leaderelection.go:253] successfully acquired lease kube-system/k8s.io-minikube-hostpath
	I0314 19:30:37.122159       1 controller.go:835] Starting provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3!
	I0314 19:30:37.123430       1 event.go:282] Event(v1.ObjectReference{Kind:"Endpoints", Namespace:"kube-system", Name:"k8s.io-minikube-hostpath", UID:"3bd26e97-7a14-45c9-a3b4-49c925374eec", APIVersion:"v1", ResourceVersion:"445", FieldPath:""}): type: 'Normal' reason: 'LeaderElection' default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3 became leader
	I0314 19:30:37.222391       1 controller.go:884] Started provisioner controller k8s.io/minikube-hostpath_default-k8s-diff-port-440341_14eef2c4-1b3e-492d-9da5-2d2c448f03c3!
	

                                                
                                                
-- /stdout --
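
The kubelet entries in the dump above point at the likely cause of the timeout: the metrics-server pod never starts because its image references the registry host fake.domain, which does not resolve ("dial tcp: lookup fake.domain: no such host"), so the pod stays in ImagePullBackOff. The following minimal Go sketch is not part of the test harness; it only reproduces the DNS step of that failure:

	package main

	import (
		"fmt"
		"net"
	)

	func main() {
		// fake.domain is the registry host used by the metrics-server image in the
		// kubelet log above; it has no DNS record, so resolution fails the same way
		// the kubelet's pull attempts do ("lookup fake.domain: no such host").
		if _, err := net.LookupHost("fake.domain"); err != nil {
			fmt.Println("lookup failed:", err)
		}
	}
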
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
helpers_test.go:261: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 get po -o=jsonpath={.items[*].metadata.name} -A --field-selector=status.phase!=Running
helpers_test.go:272: non-running pods: metrics-server-57f55c9bc5-p7s4d
helpers_test.go:274: ======> post-mortem[TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop]: describe non-running pods <======
helpers_test.go:277: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d
helpers_test.go:277: (dbg) Non-zero exit: kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d: exit status 1 (93.750867ms)

                                                
                                                
** stderr ** 
	Error from server (NotFound): pods "metrics-server-57f55c9bc5-p7s4d" not found

                                                
                                                
** /stderr **
helpers_test.go:279: kubectl --context default-k8s-diff-port-440341 describe pod metrics-server-57f55c9bc5-p7s4d: exit status 1
--- FAIL: TestStartStop/group/default-k8s-diff-port/serial/AddonExistsAfterStop (424.76s)
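
The AddonExistsAfterStop checks above and below reduce to the same wait loop: the helper repeatedly lists pods matching a label selector and gives up when the timeout expires. The sketch below illustrates that polling pattern with client-go; it is not the harness's helpers_test code, and the kubeconfig path, namespace, selector, and poll interval are assumptions made only for this example:

	package main

	import (
		"context"
		"fmt"
		"time"

		corev1 "k8s.io/api/core/v1"
		metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
		"k8s.io/client-go/kubernetes"
		"k8s.io/client-go/tools/clientcmd"
	)

	func main() {
		// Assumed kubeconfig location; the real tests build their config per profile.
		cfg, err := clientcmd.BuildConfigFromFlags("", "/home/jenkins/.kube/config")
		if err != nil {
			panic(err)
		}
		cs, err := kubernetes.NewForConfig(cfg)
		if err != nil {
			panic(err)
		}

		// The AddonExistsAfterStop checks in this report wait up to 9m0s.
		deadline := time.Now().Add(9 * time.Minute)
		for time.Now().Before(deadline) {
			pods, err := cs.CoreV1().Pods("kubernetes-dashboard").List(context.TODO(),
				metav1.ListOptions{LabelSelector: "k8s-app=kubernetes-dashboard"})
			if err != nil {
				// While the cluster is stopped or restarting, the API server refuses
				// connections; the report's WARNING lines reflect this same condition.
				fmt.Println("pod list failed:", err)
				time.Sleep(3 * time.Second)
				continue
			}
			running := 0
			for _, p := range pods.Items {
				if p.Status.Phase == corev1.PodRunning {
					running++
				}
			}
			if len(pods.Items) > 0 && running == len(pods.Items) {
				fmt.Println("all matching pods are Running")
				return
			}
			time.Sleep(3 * time.Second)
		}
		fmt.Println("timed out waiting for pods matching the selector")
	}
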

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (109.94s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop
start_stop_delete_test.go:287: (dbg) TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: waiting 9m0s for pods matching "k8s-app=kubernetes-dashboard" in namespace "kubernetes-dashboard" ...
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
E0314 19:42:14.527994  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
E0314 19:42:14.853721  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
helpers_test.go:329: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: WARNING: pod list for "kubernetes-dashboard" "k8s-app=kubernetes-dashboard" returned: Get "https://192.168.72.211:8443/api/v1/namespaces/kubernetes-dashboard/pods?labelSelector=k8s-app%3Dkubernetes-dashboard": dial tcp 192.168.72.211:8443: connect: connection refused
[the identical "connection refused" pod-list warning above repeats 25 more times while the test polls for the kubernetes-dashboard pod]
start_stop_delete_test.go:287: ***** TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop: pod "k8s-app=kubernetes-dashboard" failed to start within 9m0s: context deadline exceeded ****
start_stop_delete_test.go:287: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094
start_stop_delete_test.go:287: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (250.03405ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:287: status error: exit status 2 (may be ok)
start_stop_delete_test.go:287: "old-k8s-version-968094" apiserver is not running, skipping kubectl commands (state="Stopped")
start_stop_delete_test.go:288: failed waiting for 'addon dashboard' pod post-stop-start: k8s-app=kubernetes-dashboard within 9m0s: context deadline exceeded
start_stop_delete_test.go:291: (dbg) Run:  kubectl --context old-k8s-version-968094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard
start_stop_delete_test.go:291: (dbg) Non-zero exit: kubectl --context old-k8s-version-968094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard: context deadline exceeded (1.925µs)
start_stop_delete_test.go:293: failed to get info on kubernetes-dashboard deployments. args "kubectl --context old-k8s-version-968094 describe deploy/dashboard-metrics-scraper -n kubernetes-dashboard": context deadline exceeded
start_stop_delete_test.go:297: addon did not load correct image. Expected to contain " registry.k8s.io/echoserver:1.4". Addon deployment info: 
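The assertions above amount to polling for a Ready pod labeled k8s-app=kubernetes-dashboard and then checking that the dashboard-metrics-scraper deployment carries the overridden registry.k8s.io/echoserver:1.4 image. A rough sketch of the equivalent manual check, using the profile/context name from this run; in this run the kubectl steps would hit the same 192.168.72.211:8443 connection-refused error seen above, since the apiserver never came back after the stop (minikube reports it as Stopped):

	out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094
	kubectl --context old-k8s-version-968094 -n kubernetes-dashboard get pods -l k8s-app=kubernetes-dashboard
	kubectl --context old-k8s-version-968094 -n kubernetes-dashboard describe deploy/dashboard-metrics-scraper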
helpers_test.go:222: -----------------------post-mortem--------------------------------
helpers_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (255.003831ms)

                                                
                                                
-- stdout --
	Running

                                                
                                                
-- /stdout --
helpers_test.go:239: status error: exit status 2 (may be ok)
helpers_test.go:244: <<< TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop FAILED: start of post-mortem logs <<<
helpers_test.go:245: ======>  post-mortem[TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop]: minikube logs <======
helpers_test.go:247: (dbg) Run:  out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25
helpers_test.go:247: (dbg) Done: out/minikube-linux-amd64 -p old-k8s-version-968094 logs -n 25: (1.544584695s)
helpers_test.go:252: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop logs: 
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| Command |                          Args                          |           Profile            |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	| ssh     | -p NoKubernetes-578974 sudo                            | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC |                     |
	|         | systemctl is-active --quiet                            |                              |         |         |                     |                     |
	|         | service kubelet                                        |                              |         |         |                     |                     |
	| delete  | -p NoKubernetes-578974                                 | NoKubernetes-578974          | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:14 UTC |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:14 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| delete  | -p kubernetes-upgrade-097195                           | kubernetes-upgrade-097195    | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:15 UTC |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:15 UTC | 14 Mar 24 19:17 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| start   | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --memory=2048                                          |                              |         |         |                     |                     |
	|         | --cert-expiration=8760h                                |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	| delete  | -p cert-expiration-525214                              | cert-expiration-525214       | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	| delete  | -p                                                     | disable-driver-mounts-993602 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | disable-driver-mounts-993602                           |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:18 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p no-preload-731976             | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC | 14 Mar 24 19:16 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:16 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p embed-certs-992669            | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC | 14 Mar 24 19:17 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:17 UTC |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p default-k8s-diff-port-440341  | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC | 14 Mar 24 19:18 UTC |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| stop    | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable metrics-server -p old-k8s-version-968094        | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:18 UTC |                     |
	|         | --images=MetricsServer=registry.k8s.io/echoserver:1.4  |                              |         |         |                     |                     |
	|         | --registries=MetricsServer=fake.domain                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p no-preload-731976                  | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p no-preload-731976                                   | no-preload-731976            | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:30 UTC |
	|         | --memory=2200 --alsologtostderr                        |                              |         |         |                     |                     |
	|         | --wait=true --preload=false                            |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2                      |                              |         |         |                     |                     |
	| addons  | enable dashboard -p embed-certs-992669                 | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p embed-certs-992669                                  | embed-certs-992669           | jenkins | v1.32.0 | 14 Mar 24 19:19 UTC | 14 Mar 24 19:29 UTC |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --embed-certs --driver=kvm2                            |                              |         |         |                     |                     |
	|         |  --container-runtime=crio                              |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	| stop    | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --alsologtostderr -v=3                                 |                              |         |         |                     |                     |
	| addons  | enable dashboard -p old-k8s-version-968094             | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC | 14 Mar 24 19:20 UTC |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p old-k8s-version-968094                              | old-k8s-version-968094       | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --kvm-network=default                                  |                              |         |         |                     |                     |
	|         | --kvm-qemu-uri=qemu:///system                          |                              |         |         |                     |                     |
	|         | --disable-driver-mounts                                |                              |         |         |                     |                     |
	|         | --keep-context=false                                   |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0                           |                              |         |         |                     |                     |
	| addons  | enable dashboard -p default-k8s-diff-port-440341       | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:20 UTC |                     |
	|         | --images=MetricsScraper=registry.k8s.io/echoserver:1.4 |                              |         |         |                     |                     |
	| start   | -p                                                     | default-k8s-diff-port-440341 | jenkins | v1.32.0 | 14 Mar 24 19:21 UTC | 14 Mar 24 19:30 UTC |
	|         | default-k8s-diff-port-440341                           |                              |         |         |                     |                     |
	|         | --memory=2200                                          |                              |         |         |                     |                     |
	|         | --alsologtostderr --wait=true                          |                              |         |         |                     |                     |
	|         | --apiserver-port=8444                                  |                              |         |         |                     |                     |
	|         | --driver=kvm2                                          |                              |         |         |                     |                     |
	|         | --container-runtime=crio                               |                              |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4                           |                              |         |         |                     |                     |
	|---------|--------------------------------------------------------|------------------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 19:21:06
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 19:21:06.641191  992563 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:21:06.641325  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641335  992563 out.go:304] Setting ErrFile to fd 2...
	I0314 19:21:06.641339  992563 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:21:06.641562  992563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:21:06.642133  992563 out.go:298] Setting JSON to false
	I0314 19:21:06.643097  992563 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":97419,"bootTime":1710346648,"procs":201,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:21:06.643154  992563 start.go:139] virtualization: kvm guest
	I0314 19:21:06.645619  992563 out.go:177] * [default-k8s-diff-port-440341] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:21:06.646948  992563 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:21:06.646951  992563 notify.go:220] Checking for updates...
	I0314 19:21:06.648183  992563 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:21:06.649479  992563 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:21:06.650646  992563 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:21:06.651793  992563 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:21:06.652871  992563 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:21:06.654306  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:21:06.654679  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.654715  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.669822  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39007
	I0314 19:21:06.670226  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.670730  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.670752  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.671113  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.671298  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.671562  992563 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:21:06.671894  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:21:06.671955  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:21:06.686096  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36019
	I0314 19:21:06.686486  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:21:06.686930  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:21:06.686950  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:21:06.687304  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:21:06.687516  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:21:06.719775  992563 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 19:21:06.721100  992563 start.go:297] selected driver: kvm2
	I0314 19:21:06.721112  992563 start.go:901] validating driver "kvm2" against &{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.721237  992563 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:21:06.722206  992563 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.722303  992563 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 19:21:06.737068  992563 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 19:21:06.737396  992563 start_flags.go:947] Waiting for all components: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:21:06.737423  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:21:06.737430  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:21:06.737470  992563 start.go:340] cluster config:
	{Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:21:06.737562  992563 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 19:21:06.739290  992563 out.go:177] * Starting "default-k8s-diff-port-440341" primary control-plane node in "default-k8s-diff-port-440341" cluster
	I0314 19:21:06.456441  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:06.740612  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:21:06.740639  992563 preload.go:147] Found local preload: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4
	I0314 19:21:06.740649  992563 cache.go:56] Caching tarball of preloaded images
	I0314 19:21:06.740716  992563 preload.go:173] Found /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 in cache, skipping download
	I0314 19:21:06.740727  992563 cache.go:59] Finished verifying existence of preloaded tar for v1.28.4 on crio
	I0314 19:21:06.740828  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:21:06.741044  992563 start.go:360] acquireMachinesLock for default-k8s-diff-port-440341: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:21:09.528474  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:15.608487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:18.680465  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:24.760483  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:27.832487  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:33.912460  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:36.984446  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:43.064437  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:46.136461  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:52.216505  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:21:55.288457  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:01.368528  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:04.440444  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:10.520511  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:13.592559  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:19.672501  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:22.744517  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:28.824450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:31.896452  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:37.976513  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:41.048520  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:47.128498  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:50.200540  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:56.280558  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:22:59.352482  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:05.432488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:08.504481  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:14.584488  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:17.656515  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:23.736418  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:26.808447  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:32.888521  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:35.960649  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:42.040524  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:45.112450  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:51.192455  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:23:54.264715  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:00.344497  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:03.416432  991880 main.go:141] libmachine: Error dialing TCP: dial tcp 192.168.39.148:22: connect: no route to host
	I0314 19:24:06.421344  992056 start.go:364] duration metric: took 4m13.372196869s to acquireMachinesLock for "embed-certs-992669"
	I0314 19:24:06.421482  992056 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:06.421491  992056 fix.go:54] fixHost starting: 
	I0314 19:24:06.421996  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:06.422035  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:06.437799  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:43367
	I0314 19:24:06.438270  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:06.438847  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:24:06.438870  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:06.439255  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:06.439520  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:06.439648  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:24:06.441355  992056 fix.go:112] recreateIfNeeded on embed-certs-992669: state=Stopped err=<nil>
	I0314 19:24:06.441396  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	W0314 19:24:06.441578  992056 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:06.443265  992056 out.go:177] * Restarting existing kvm2 VM for "embed-certs-992669" ...
	I0314 19:24:06.444639  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Start
	I0314 19:24:06.444811  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring networks are active...
	I0314 19:24:06.445562  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network default is active
	I0314 19:24:06.445907  992056 main.go:141] libmachine: (embed-certs-992669) Ensuring network mk-embed-certs-992669 is active
	I0314 19:24:06.446291  992056 main.go:141] libmachine: (embed-certs-992669) Getting domain xml...
	I0314 19:24:06.446865  992056 main.go:141] libmachine: (embed-certs-992669) Creating domain...
	I0314 19:24:07.655936  992056 main.go:141] libmachine: (embed-certs-992669) Waiting to get IP...
	I0314 19:24:07.657162  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.657691  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.657795  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.657671  993021 retry.go:31] will retry after 279.188222ms: waiting for machine to come up
	I0314 19:24:07.938384  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:07.938890  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:07.938914  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:07.938842  993021 retry.go:31] will retry after 362.619543ms: waiting for machine to come up
	I0314 19:24:06.418272  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:06.418393  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.418709  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:24:06.418745  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:24:06.419028  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:24:06.421200  991880 machine.go:97] duration metric: took 4m37.410478688s to provisionDockerMachine
	I0314 19:24:06.421248  991880 fix.go:56] duration metric: took 4m37.431639776s for fixHost
	I0314 19:24:06.421257  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 4m37.431664509s
	W0314 19:24:06.421278  991880 start.go:713] error starting host: provision: host is not running
	W0314 19:24:06.421471  991880 out.go:239] ! StartHost failed, but will try again: provision: host is not running
	I0314 19:24:06.421480  991880 start.go:728] Will try again in 5 seconds ...
	I0314 19:24:08.303564  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.304022  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.304049  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.303988  993021 retry.go:31] will retry after 299.406141ms: waiting for machine to come up
	I0314 19:24:08.605486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:08.605955  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:08.605983  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:08.605903  993021 retry.go:31] will retry after 438.174832ms: waiting for machine to come up
	I0314 19:24:09.045423  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.045943  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.045985  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.045874  993021 retry.go:31] will retry after 484.342881ms: waiting for machine to come up
	I0314 19:24:09.531525  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:09.531992  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:09.532032  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:09.531943  993021 retry.go:31] will retry after 680.030854ms: waiting for machine to come up
	I0314 19:24:10.213303  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:10.213760  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:10.213787  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:10.213714  993021 retry.go:31] will retry after 1.051377672s: waiting for machine to come up
	I0314 19:24:11.267112  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:11.267711  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:11.267736  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:11.267647  993021 retry.go:31] will retry after 1.45882013s: waiting for machine to come up
	I0314 19:24:12.729033  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:12.729529  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:12.729565  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:12.729476  993021 retry.go:31] will retry after 1.6586819s: waiting for machine to come up
	I0314 19:24:11.423018  991880 start.go:360] acquireMachinesLock for no-preload-731976: {Name:mk9a566594d7aef48d36f06eee60109ab60ed27a Clock:{} Delay:500ms Timeout:13m0s Cancel:<nil>}
	I0314 19:24:14.390304  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:14.390783  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:14.390813  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:14.390731  993021 retry.go:31] will retry after 1.484880543s: waiting for machine to come up
	I0314 19:24:15.877389  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:15.877877  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:15.877907  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:15.877817  993021 retry.go:31] will retry after 2.524223695s: waiting for machine to come up
	I0314 19:24:18.405110  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:18.405486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:18.405517  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:18.405433  993021 retry.go:31] will retry after 3.354970224s: waiting for machine to come up
	I0314 19:24:21.761886  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:21.762325  992056 main.go:141] libmachine: (embed-certs-992669) DBG | unable to find current IP address of domain embed-certs-992669 in network mk-embed-certs-992669
	I0314 19:24:21.762374  992056 main.go:141] libmachine: (embed-certs-992669) DBG | I0314 19:24:21.762285  993021 retry.go:31] will retry after 3.996500899s: waiting for machine to come up
	I0314 19:24:27.129245  992344 start.go:364] duration metric: took 3m53.310661355s to acquireMachinesLock for "old-k8s-version-968094"
	I0314 19:24:27.129312  992344 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:27.129324  992344 fix.go:54] fixHost starting: 
	I0314 19:24:27.129726  992344 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:27.129761  992344 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:27.150444  992344 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44743
	I0314 19:24:27.150921  992344 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:27.151429  992344 main.go:141] libmachine: Using API Version  1
	I0314 19:24:27.151453  992344 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:27.151859  992344 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:27.152058  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:27.152265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetState
	I0314 19:24:27.153847  992344 fix.go:112] recreateIfNeeded on old-k8s-version-968094: state=Stopped err=<nil>
	I0314 19:24:27.153876  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	W0314 19:24:27.154051  992344 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:27.156243  992344 out.go:177] * Restarting existing kvm2 VM for "old-k8s-version-968094" ...
	I0314 19:24:25.763430  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.763935  992056 main.go:141] libmachine: (embed-certs-992669) Found IP for machine: 192.168.50.213
	I0314 19:24:25.763962  992056 main.go:141] libmachine: (embed-certs-992669) Reserving static IP address...
	I0314 19:24:25.763974  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has current primary IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.764419  992056 main.go:141] libmachine: (embed-certs-992669) Reserved static IP address: 192.168.50.213
	I0314 19:24:25.764444  992056 main.go:141] libmachine: (embed-certs-992669) Waiting for SSH to be available...
	I0314 19:24:25.764467  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.764546  992056 main.go:141] libmachine: (embed-certs-992669) DBG | skip adding static IP to network mk-embed-certs-992669 - found existing host DHCP lease matching {name: "embed-certs-992669", mac: "52:54:00:05:e0:54", ip: "192.168.50.213"}
	I0314 19:24:25.764568  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Getting to WaitForSSH function...
	I0314 19:24:25.766675  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767018  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.767048  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.767190  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH client type: external
	I0314 19:24:25.767237  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa (-rw-------)
	I0314 19:24:25.767278  992056 main.go:141] libmachine: (embed-certs-992669) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.50.213 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:25.767299  992056 main.go:141] libmachine: (embed-certs-992669) DBG | About to run SSH command:
	I0314 19:24:25.767312  992056 main.go:141] libmachine: (embed-certs-992669) DBG | exit 0
	I0314 19:24:25.892385  992056 main.go:141] libmachine: (embed-certs-992669) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:25.892837  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetConfigRaw
	I0314 19:24:25.893525  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:25.895998  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896372  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.896411  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.896708  992056 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/config.json ...
	I0314 19:24:25.896897  992056 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:25.896917  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:25.897155  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:25.899572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899856  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:25.899882  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:25.899979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:25.900241  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:25.900594  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:25.900763  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:25.901166  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:25.901185  992056 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:26.013286  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:26.013326  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013609  992056 buildroot.go:166] provisioning hostname "embed-certs-992669"
	I0314 19:24:26.013640  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.013843  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.016614  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017006  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.017041  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.017202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.017397  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017596  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.017746  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.017903  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.018131  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.018152  992056 main.go:141] libmachine: About to run SSH command:
	sudo hostname embed-certs-992669 && echo "embed-certs-992669" | sudo tee /etc/hostname
	I0314 19:24:26.143977  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: embed-certs-992669
	
	I0314 19:24:26.144009  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.146661  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147021  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.147052  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.147182  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.147387  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147542  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.147677  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.147856  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.148037  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.148053  992056 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sembed-certs-992669' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 embed-certs-992669/g' /etc/hosts;
				else 
					echo '127.0.1.1 embed-certs-992669' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:26.266363  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:26.266400  992056 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:26.266421  992056 buildroot.go:174] setting up certificates
	I0314 19:24:26.266430  992056 provision.go:84] configureAuth start
	I0314 19:24:26.266439  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetMachineName
	I0314 19:24:26.266755  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:26.269450  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269803  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.269833  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.269979  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.272179  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272519  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.272572  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.272709  992056 provision.go:143] copyHostCerts
	I0314 19:24:26.272812  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:26.272823  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:26.272892  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:26.272992  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:26.273007  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:26.273034  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:26.273086  992056 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:26.273093  992056 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:26.273113  992056 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:26.273199  992056 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.embed-certs-992669 san=[127.0.0.1 192.168.50.213 embed-certs-992669 localhost minikube]
	I0314 19:24:26.424098  992056 provision.go:177] copyRemoteCerts
	I0314 19:24:26.424165  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:26.424193  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.426870  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427216  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.427293  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.427367  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.427559  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.427745  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.427889  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.514935  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:26.542295  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1224 bytes)
	I0314 19:24:26.568557  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1675 bytes)
	I0314 19:24:26.595238  992056 provision.go:87] duration metric: took 328.794871ms to configureAuth
	I0314 19:24:26.595266  992056 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:26.595465  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:24:26.595587  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.598447  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598776  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.598810  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.598958  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.599149  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599341  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.599446  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.599576  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:26.599763  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:26.599784  992056 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:26.883323  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:26.883363  992056 machine.go:97] duration metric: took 986.450882ms to provisionDockerMachine
	I0314 19:24:26.883378  992056 start.go:293] postStartSetup for "embed-certs-992669" (driver="kvm2")
	I0314 19:24:26.883393  992056 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:26.883425  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:26.883799  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:26.883840  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:26.886707  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887088  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:26.887121  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:26.887271  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:26.887471  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:26.887685  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:26.887842  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:26.972276  992056 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:26.977397  992056 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:26.977452  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:26.977557  992056 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:26.977660  992056 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:26.977771  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:26.989997  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:27.015656  992056 start.go:296] duration metric: took 132.26294ms for postStartSetup
	I0314 19:24:27.015701  992056 fix.go:56] duration metric: took 20.594210437s for fixHost
	I0314 19:24:27.015723  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.018428  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018779  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.018820  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.018934  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.019141  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019322  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.019477  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.019663  992056 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:27.019904  992056 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.50.213 22 <nil> <nil>}
	I0314 19:24:27.019918  992056 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:27.129041  992056 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444267.099898940
	
	I0314 19:24:27.129062  992056 fix.go:216] guest clock: 1710444267.099898940
	I0314 19:24:27.129070  992056 fix.go:229] Guest: 2024-03-14 19:24:27.09989894 +0000 UTC Remote: 2024-03-14 19:24:27.015704928 +0000 UTC m=+274.119026995 (delta=84.194012ms)
	I0314 19:24:27.129129  992056 fix.go:200] guest clock delta is within tolerance: 84.194012ms
	I0314 19:24:27.129134  992056 start.go:83] releasing machines lock for "embed-certs-992669", held for 20.707742604s
	I0314 19:24:27.129165  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.129445  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:27.132300  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132666  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.132696  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.132891  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133513  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133729  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:24:27.133832  992056 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:27.133885  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.133989  992056 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:27.134020  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:24:27.136789  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137077  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137149  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137173  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137340  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:27.137486  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:27.137532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.137694  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.137732  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:24:27.137870  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.138017  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:24:27.138177  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:24:27.138423  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:24:27.241866  992056 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:27.248597  992056 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:27.398034  992056 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:27.404793  992056 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:27.404866  992056 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:27.425321  992056 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:27.425347  992056 start.go:494] detecting cgroup driver to use...
	I0314 19:24:27.425441  992056 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:27.446847  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:27.463193  992056 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:27.463248  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:27.477995  992056 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:27.494158  992056 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:27.626812  992056 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:27.788432  992056 docker.go:233] disabling docker service ...
	I0314 19:24:27.788504  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:27.805552  992056 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:27.820563  992056 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:27.961941  992056 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:28.083364  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:28.099491  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:28.121026  992056 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:24:28.121100  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.133361  992056 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:28.133445  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.145489  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.158112  992056 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:28.171221  992056 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:28.184604  992056 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:28.196001  992056 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:28.196052  992056 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:28.212800  992056 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:28.225099  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:28.353741  992056 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:24:28.497018  992056 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:28.497123  992056 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:28.502406  992056 start.go:562] Will wait 60s for crictl version
	I0314 19:24:28.502464  992056 ssh_runner.go:195] Run: which crictl
	I0314 19:24:28.506848  992056 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:28.546552  992056 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:28.546640  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.580646  992056 ssh_runner.go:195] Run: crio --version
	I0314 19:24:28.613244  992056 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:24:27.157735  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .Start
	I0314 19:24:27.157923  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring networks are active...
	I0314 19:24:27.158602  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network default is active
	I0314 19:24:27.158940  992344 main.go:141] libmachine: (old-k8s-version-968094) Ensuring network mk-old-k8s-version-968094 is active
	I0314 19:24:27.159464  992344 main.go:141] libmachine: (old-k8s-version-968094) Getting domain xml...
	I0314 19:24:27.160230  992344 main.go:141] libmachine: (old-k8s-version-968094) Creating domain...
	I0314 19:24:28.397890  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting to get IP...
	I0314 19:24:28.398964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.399389  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.399455  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.399366  993151 retry.go:31] will retry after 254.808358ms: waiting for machine to come up
	I0314 19:24:28.655922  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.656383  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.656414  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.656329  993151 retry.go:31] will retry after 305.278558ms: waiting for machine to come up
	I0314 19:24:28.614866  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetIP
	I0314 19:24:28.618114  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618550  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:24:28.618595  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:24:28.618875  992056 ssh_runner.go:195] Run: grep 192.168.50.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:28.623905  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.50.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:28.637729  992056 kubeadm.go:877] updating cluster {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:28.637900  992056 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:24:28.637976  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:28.679943  992056 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:24:28.680020  992056 ssh_runner.go:195] Run: which lz4
	I0314 19:24:28.684879  992056 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:28.689966  992056 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:28.690002  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:24:30.647436  992056 crio.go:444] duration metric: took 1.962590984s to copy over tarball
	I0314 19:24:30.647522  992056 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:28.963796  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:28.964329  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:28.964360  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:28.964283  993151 retry.go:31] will retry after 405.241077ms: waiting for machine to come up
	I0314 19:24:29.371107  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.371677  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.371724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.371634  993151 retry.go:31] will retry after 392.618577ms: waiting for machine to come up
	I0314 19:24:29.766406  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:29.766893  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:29.766916  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:29.766848  993151 retry.go:31] will retry after 540.221203ms: waiting for machine to come up
	I0314 19:24:30.308703  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:30.309134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:30.309165  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:30.309075  993151 retry.go:31] will retry after 919.467685ms: waiting for machine to come up
	I0314 19:24:31.230536  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:31.231022  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:31.231055  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:31.230955  993151 retry.go:31] will retry after 1.096403831s: waiting for machine to come up
	I0314 19:24:32.329625  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:32.330123  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:32.330150  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:32.330079  993151 retry.go:31] will retry after 959.221478ms: waiting for machine to come up
	I0314 19:24:33.291448  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:33.291863  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:33.291896  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:33.291811  993151 retry.go:31] will retry after 1.719262878s: waiting for machine to come up
	I0314 19:24:33.418411  992056 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (2.770860454s)
	I0314 19:24:33.418444  992056 crio.go:451] duration metric: took 2.770963996s to extract the tarball
	I0314 19:24:33.418458  992056 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:24:33.461358  992056 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:33.512360  992056 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:24:33.512392  992056 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:24:33.512403  992056 kubeadm.go:928] updating node { 192.168.50.213 8443 v1.28.4 crio true true} ...
	I0314 19:24:33.512647  992056 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=embed-certs-992669 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.50.213
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:33.512740  992056 ssh_runner.go:195] Run: crio config
	I0314 19:24:33.572013  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:33.572042  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:33.572058  992056 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:33.572089  992056 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.50.213 APIServerPort:8443 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:embed-certs-992669 NodeName:embed-certs-992669 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.50.213"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.50.213 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:24:33.572310  992056 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.50.213
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "embed-certs-992669"
	  kubeletExtraArgs:
	    node-ip: 192.168.50.213
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.50.213"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:33.572391  992056 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:24:33.583442  992056 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:33.583514  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:33.593833  992056 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (318 bytes)
	I0314 19:24:33.611517  992056 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:33.630287  992056 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2162 bytes)
	I0314 19:24:33.649961  992056 ssh_runner.go:195] Run: grep 192.168.50.213	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:33.654803  992056 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.50.213	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:33.669018  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:33.787097  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:33.806023  992056 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669 for IP: 192.168.50.213
	I0314 19:24:33.806049  992056 certs.go:194] generating shared ca certs ...
	I0314 19:24:33.806076  992056 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:33.806256  992056 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:33.806310  992056 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:33.806325  992056 certs.go:256] generating profile certs ...
	I0314 19:24:33.806434  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/client.key
	I0314 19:24:33.806536  992056 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key.c0728cf7
	I0314 19:24:33.806597  992056 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key
	I0314 19:24:33.806759  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:33.806801  992056 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:33.806815  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:33.806850  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:33.806890  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:33.806919  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:33.806982  992056 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:33.807845  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:33.856253  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:33.912784  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:33.954957  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:33.993293  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1428 bytes)
	I0314 19:24:34.037089  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1671 bytes)
	I0314 19:24:34.064883  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:34.091958  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/embed-certs-992669/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:34.118801  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:24:34.145200  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:24:34.177627  992056 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:34.205768  992056 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:24:34.228516  992056 ssh_runner.go:195] Run: openssl version
	I0314 19:24:34.236753  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:24:34.251464  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257801  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.257854  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:24:34.264945  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:24:34.277068  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:24:34.289085  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294602  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.294670  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:24:34.301147  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:24:34.313131  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:24:34.324658  992056 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329681  992056 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.329741  992056 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:24:34.336033  992056 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:24:34.347545  992056 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:24:34.352395  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:24:34.358770  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:24:34.364979  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:24:34.371983  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:24:34.378320  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:24:34.385155  992056 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:24:34.392023  992056 kubeadm.go:391] StartCluster: {Name:embed-certs-992669 KeepContext:false EmbedCerts:true MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:embed-certs-992669 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:24:34.392123  992056 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:24:34.392163  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.431071  992056 cri.go:89] found id: ""
	I0314 19:24:34.431146  992056 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:24:34.442517  992056 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:24:34.442537  992056 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:24:34.442543  992056 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:24:34.442591  992056 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:24:34.452897  992056 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:24:34.453878  992056 kubeconfig.go:125] found "embed-certs-992669" server: "https://192.168.50.213:8443"
	I0314 19:24:34.456056  992056 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:24:34.466222  992056 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.50.213
	I0314 19:24:34.466280  992056 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:24:34.466297  992056 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:24:34.466350  992056 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:24:34.514040  992056 cri.go:89] found id: ""
	I0314 19:24:34.514150  992056 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:24:34.532904  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:24:34.543553  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:24:34.543572  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:24:34.543621  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:24:34.553476  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:24:34.553537  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:24:34.564032  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:24:34.573782  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:24:34.573880  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:24:34.584510  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.595906  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:24:34.595970  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:24:34.610866  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:24:34.623752  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:24:34.623808  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:24:34.634364  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:24:34.645735  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:34.774124  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.518494  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.777109  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:35.873101  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
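The restart path above does not run a full `kubeadm init`; it replays individual init phases (certs, kubeconfig, kubelet-start, control-plane, etcd) against the generated /var/tmp/minikube/kubeadm.yaml. A minimal sketch of that sequence is shown below; it is illustrative only and omits the sudo/PATH handling the log shows minikube using.

	package main

	import (
		"fmt"
		"os/exec"
	)

	func main() {
		// Phase names and config path are taken verbatim from the log above.
		phases := [][]string{
			{"init", "phase", "certs", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubeconfig", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "kubelet-start", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "control-plane", "all", "--config", "/var/tmp/minikube/kubeadm.yaml"},
			{"init", "phase", "etcd", "local", "--config", "/var/tmp/minikube/kubeadm.yaml"},
		}
		for _, args := range phases {
			out, err := exec.Command("kubeadm", args...).CombinedOutput()
			if err != nil {
				fmt.Printf("kubeadm %v failed: %v\n%s\n", args, err, out)
				return
			}
		}
		fmt.Println("control-plane phases completed")
	}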
	I0314 19:24:35.991242  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:24:35.991340  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.491712  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:36.991589  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:24:37.035324  992056 api_server.go:72] duration metric: took 1.044079871s to wait for apiserver process to appear ...
	I0314 19:24:37.035360  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:24:37.035414  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:37.036045  992056 api_server.go:269] stopped: https://192.168.50.213:8443/healthz: Get "https://192.168.50.213:8443/healthz": dial tcp 192.168.50.213:8443: connect: connection refused
	I0314 19:24:37.535727  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:35.013374  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:35.013750  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:35.013781  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:35.013702  993151 retry.go:31] will retry after 1.413824554s: waiting for machine to come up
	I0314 19:24:36.429118  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:36.429704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:36.429738  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:36.429643  993151 retry.go:31] will retry after 2.349477476s: waiting for machine to come up
	I0314 19:24:40.106309  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.106348  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.106381  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.155310  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:24:40.155352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:24:40.535833  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:40.544840  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:40.544869  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.036483  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.049323  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:24:41.049352  992056 api_server.go:103] status: https://192.168.50.213:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:24:41.536465  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:24:41.542411  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:24:41.550034  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:24:41.550066  992056 api_server.go:131] duration metric: took 4.514697227s to wait for apiserver health ...
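As the sequence above shows, the apiserver first answers /healthz with 403 (anonymous access rejected before RBAC bootstrap completes), then 500 (rbac/bootstrap-roles and scheduling/bootstrap-system-priority-classes poststart hooks still failing), and finally a plain 200 "ok". A minimal Go sketch of that polling loop follows; it is illustrative rather than minikube's api_server.go, the timeout is an assumed value, and TLS verification is skipped because the apiserver serves a cluster-internal certificate.

	package main

	import (
		"crypto/tls"
		"fmt"
		"io"
		"net/http"
		"time"
	)

	// waitForHealthz polls /healthz until it returns HTTP 200, treating
	// 403 and 500 responses (as in the log above) as "not ready yet".
	func waitForHealthz(url string, timeout time.Duration) error {
		client := &http.Client{
			Timeout: 5 * time.Second,
			// Skip verification for the self-signed apiserver cert (sketch only).
			Transport: &http.Transport{TLSClientConfig: &tls.Config{InsecureSkipVerify: true}},
		}
		deadline := time.Now().Add(timeout)
		for time.Now().Before(deadline) {
			resp, err := client.Get(url)
			if err == nil {
				body, _ := io.ReadAll(resp.Body)
				resp.Body.Close()
				if resp.StatusCode == http.StatusOK {
					return nil
				}
				fmt.Printf("healthz returned %d: %s\n", resp.StatusCode, body)
			}
			time.Sleep(500 * time.Millisecond)
		}
		return fmt.Errorf("apiserver did not become healthy within %s", timeout)
	}

	func main() {
		// Endpoint taken from the log above; the 2-minute budget is assumed.
		if err := waitForHealthz("https://192.168.50.213:8443/healthz", 2*time.Minute); err != nil {
			fmt.Println(err)
		}
	}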
	I0314 19:24:41.550078  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:24:41.550086  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:41.551967  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:24:41.553380  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:24:41.564892  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:24:41.585838  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:24:41.600993  992056 system_pods.go:59] 8 kube-system pods found
	I0314 19:24:41.601025  992056 system_pods.go:61] "coredns-5dd5756b68-jpsr6" [80728635-786f-442e-80be-811e3292128b] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:24:41.601037  992056 system_pods.go:61] "etcd-embed-certs-992669" [4bd7ff48-fe02-4b55-b1f5-cf195efae581] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:24:41.601043  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [2a5f81e9-4943-47d9-a705-e91b802bd506] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:24:41.601052  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [50904b48-cbc6-494c-8ed0-ef558f20513c] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:24:41.601057  992056 system_pods.go:61] "kube-proxy-nsgs6" [d26d8d3f-04ca-4f68-9016-48552bcdc2f4] Running / Ready:ContainersNotReady (containers with unready status: [kube-proxy]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-proxy])
	I0314 19:24:41.601062  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [bf535a02-78be-44b0-8ebb-338754867930] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:24:41.601067  992056 system_pods.go:61] "metrics-server-57f55c9bc5-w8cj6" [398e104c-24c4-45db-94fb-44188cfa85a8] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:24:41.601071  992056 system_pods.go:61] "storage-provisioner" [66abcc06-9867-4617-afc1-3fa370940f80] Running / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:24:41.601077  992056 system_pods.go:74] duration metric: took 15.215121ms to wait for pod list to return data ...
	I0314 19:24:41.601085  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:24:41.606110  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:24:41.606135  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:24:41.606146  992056 node_conditions.go:105] duration metric: took 5.056699ms to run NodePressure ...
	I0314 19:24:41.606163  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:24:41.842508  992056 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850325  992056 kubeadm.go:733] kubelet initialised
	I0314 19:24:41.850344  992056 kubeadm.go:734] duration metric: took 7.804586ms waiting for restarted kubelet to initialise ...
	I0314 19:24:41.850352  992056 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:24:41.857067  992056 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.861933  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861954  992056 pod_ready.go:81] duration metric: took 4.862588ms for pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.861963  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "coredns-5dd5756b68-jpsr6" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.861971  992056 pod_ready.go:78] waiting up to 4m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.869015  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869044  992056 pod_ready.go:81] duration metric: took 7.059854ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.869055  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "etcd-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.869063  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.877475  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877499  992056 pod_ready.go:81] duration metric: took 8.426466ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.877517  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.877525  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:41.989806  992056 pod_ready.go:97] node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989836  992056 pod_ready.go:81] duration metric: took 112.302268ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	E0314 19:24:41.989846  992056 pod_ready.go:66] WaitExtra: waitPodCondition: node "embed-certs-992669" hosting pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace is currently not "Ready" (skipping!): node "embed-certs-992669" has status "Ready":"False"
	I0314 19:24:41.989852  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390883  992056 pod_ready.go:92] pod "kube-proxy-nsgs6" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:42.390916  992056 pod_ready.go:81] duration metric: took 401.05393ms for pod "kube-proxy-nsgs6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:42.390929  992056 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:38.781555  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:38.782105  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:38.782134  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:38.782060  993151 retry.go:31] will retry after 3.062702235s: waiting for machine to come up
	I0314 19:24:41.846373  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:41.846889  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:41.846928  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:41.846822  993151 retry.go:31] will retry after 3.245094913s: waiting for machine to come up
	I0314 19:24:44.397857  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:46.400091  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:45.093425  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:45.093821  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | unable to find current IP address of domain old-k8s-version-968094 in network mk-old-k8s-version-968094
	I0314 19:24:45.093848  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | I0314 19:24:45.093766  993151 retry.go:31] will retry after 4.695140566s: waiting for machine to come up
	I0314 19:24:51.181742  992563 start.go:364] duration metric: took 3m44.440656871s to acquireMachinesLock for "default-k8s-diff-port-440341"
	I0314 19:24:51.181827  992563 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:24:51.181839  992563 fix.go:54] fixHost starting: 
	I0314 19:24:51.182279  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:24:51.182325  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:24:51.202636  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40723
	I0314 19:24:51.203153  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:24:51.203703  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:24:51.203732  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:24:51.204197  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:24:51.204404  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:24:51.204622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:24:51.206147  992563 fix.go:112] recreateIfNeeded on default-k8s-diff-port-440341: state=Stopped err=<nil>
	I0314 19:24:51.206184  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	W0314 19:24:51.206365  992563 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:24:51.208359  992563 out.go:177] * Restarting existing kvm2 VM for "default-k8s-diff-port-440341" ...
	I0314 19:24:51.209719  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Start
	I0314 19:24:51.209912  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring networks are active...
	I0314 19:24:51.210618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network default is active
	I0314 19:24:51.210996  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Ensuring network mk-default-k8s-diff-port-440341 is active
	I0314 19:24:51.211386  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Getting domain xml...
	I0314 19:24:51.212126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Creating domain...
	I0314 19:24:49.791977  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792478  992344 main.go:141] libmachine: (old-k8s-version-968094) Found IP for machine: 192.168.72.211
	I0314 19:24:49.792509  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has current primary IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.792519  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserving static IP address...
	I0314 19:24:49.792964  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.792995  992344 main.go:141] libmachine: (old-k8s-version-968094) Reserved static IP address: 192.168.72.211
	I0314 19:24:49.793028  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | skip adding static IP to network mk-old-k8s-version-968094 - found existing host DHCP lease matching {name: "old-k8s-version-968094", mac: "52:54:00:45:00:8a", ip: "192.168.72.211"}
	I0314 19:24:49.793049  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Getting to WaitForSSH function...
	I0314 19:24:49.793060  992344 main.go:141] libmachine: (old-k8s-version-968094) Waiting for SSH to be available...
	I0314 19:24:49.795809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796119  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.796155  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.796340  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH client type: external
	I0314 19:24:49.796365  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa (-rw-------)
	I0314 19:24:49.796399  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.72.211 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:24:49.796418  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | About to run SSH command:
	I0314 19:24:49.796437  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | exit 0
	I0314 19:24:49.928364  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | SSH cmd err, output: <nil>: 
	I0314 19:24:49.928849  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetConfigRaw
	I0314 19:24:49.929565  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:49.932065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932543  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.932575  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.932818  992344 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/config.json ...
	I0314 19:24:49.933027  992344 machine.go:94] provisionDockerMachine start ...
	I0314 19:24:49.933049  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:49.933280  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:49.935870  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936260  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:49.936292  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:49.936447  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:49.936649  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936821  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:49.936940  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:49.937112  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:49.937318  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:49.937331  992344 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:24:50.053144  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:24:50.053184  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053461  992344 buildroot.go:166] provisioning hostname "old-k8s-version-968094"
	I0314 19:24:50.053495  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.053715  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.056663  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057034  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.057061  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.057265  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.057486  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057647  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.057775  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.057990  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.058167  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.058181  992344 main.go:141] libmachine: About to run SSH command:
	sudo hostname old-k8s-version-968094 && echo "old-k8s-version-968094" | sudo tee /etc/hostname
	I0314 19:24:50.190002  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: old-k8s-version-968094
	
	I0314 19:24:50.190030  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.192892  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193306  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.193343  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.193578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.193825  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194002  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.194128  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.194298  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.194472  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.194493  992344 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sold-k8s-version-968094' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 old-k8s-version-968094/g' /etc/hosts;
				else 
					echo '127.0.1.1 old-k8s-version-968094' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:24:50.322939  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:24:50.322975  992344 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:24:50.323003  992344 buildroot.go:174] setting up certificates
	I0314 19:24:50.323016  992344 provision.go:84] configureAuth start
	I0314 19:24:50.323026  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetMachineName
	I0314 19:24:50.323344  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:50.326376  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.326798  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.326827  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.327082  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.329704  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.329994  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.330026  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.330131  992344 provision.go:143] copyHostCerts
	I0314 19:24:50.330206  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:24:50.330223  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:24:50.330299  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:24:50.330426  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:24:50.330435  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:24:50.330472  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:24:50.330549  992344 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:24:50.330560  992344 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:24:50.330584  992344 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:24:50.330649  992344 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.old-k8s-version-968094 san=[127.0.0.1 192.168.72.211 localhost minikube old-k8s-version-968094]
	I0314 19:24:50.471374  992344 provision.go:177] copyRemoteCerts
	I0314 19:24:50.471438  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:24:50.471469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.474223  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474570  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.474608  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.474773  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.474969  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.475149  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.475261  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:50.563859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:24:50.593259  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:24:50.624146  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1233 bytes)
	I0314 19:24:50.651113  992344 provision.go:87] duration metric: took 328.081801ms to configureAuth
	I0314 19:24:50.651158  992344 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:24:50.651348  992344 config.go:182] Loaded profile config "old-k8s-version-968094": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:24:50.651445  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.654716  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655065  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.655096  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.655328  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.655552  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655730  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.655870  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.656012  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:50.656191  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:50.656223  992344 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:24:50.925456  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:24:50.925492  992344 machine.go:97] duration metric: took 992.449429ms to provisionDockerMachine
	I0314 19:24:50.925508  992344 start.go:293] postStartSetup for "old-k8s-version-968094" (driver="kvm2")
	I0314 19:24:50.925518  992344 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:24:50.925535  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:50.925909  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:24:50.925957  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:50.928724  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929100  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:50.929124  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:50.929292  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:50.929469  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:50.929606  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:50.929718  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.020664  992344 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:24:51.025418  992344 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:24:51.025449  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:24:51.025530  992344 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:24:51.025642  992344 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:24:51.025732  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:24:51.036808  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:51.062597  992344 start.go:296] duration metric: took 137.076655ms for postStartSetup
	I0314 19:24:51.062641  992344 fix.go:56] duration metric: took 23.933315476s for fixHost
	I0314 19:24:51.062667  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.065408  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.065766  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.065809  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.066008  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.066241  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066426  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.066578  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.066751  992344 main.go:141] libmachine: Using SSH client type: native
	I0314 19:24:51.066923  992344 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.72.211 22 <nil> <nil>}
	I0314 19:24:51.066934  992344 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:24:51.181564  992344 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444291.127685902
	
	I0314 19:24:51.181593  992344 fix.go:216] guest clock: 1710444291.127685902
	I0314 19:24:51.181604  992344 fix.go:229] Guest: 2024-03-14 19:24:51.127685902 +0000 UTC Remote: 2024-03-14 19:24:51.062645814 +0000 UTC m=+257.398231189 (delta=65.040088ms)
	I0314 19:24:51.181630  992344 fix.go:200] guest clock delta is within tolerance: 65.040088ms
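The `date +%s.%N` output from the guest (1710444291.127685902 above) is compared against the host clock, and the resulting delta (65ms here) is checked against a tolerance before the machines lock is released. A minimal sketch of that comparison is below; the 1-second tolerance is an assumed value for illustration, since the real threshold in minikube's fix.go is not shown in this log.

	package main

	import (
		"fmt"
		"math"
		"strconv"
		"strings"
		"time"
	)

	// parseGuestClock converts `date +%s.%N` output (seconds.nanoseconds)
	// into a time.Time, e.g. "1710444291.127685902" from the log above.
	func parseGuestClock(s string) (time.Time, error) {
		parts := strings.SplitN(strings.TrimSpace(s), ".", 2)
		sec, err := strconv.ParseInt(parts[0], 10, 64)
		if err != nil {
			return time.Time{}, err
		}
		var nsec int64
		if len(parts) == 2 {
			nsec, err = strconv.ParseInt(parts[1], 10, 64)
			if err != nil {
				return time.Time{}, err
			}
		}
		return time.Unix(sec, nsec), nil
	}

	func main() {
		guest, err := parseGuestClock("1710444291.127685902")
		if err != nil {
			fmt.Println(err)
			return
		}
		delta := time.Now().Sub(guest)
		// Assumed tolerance; minikube's actual threshold is not in this log.
		const tolerance = 1 * time.Second
		fmt.Printf("guest clock delta: %v (within tolerance: %v)\n",
			delta, math.Abs(float64(delta)) <= float64(tolerance))
	}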
	I0314 19:24:51.181636  992344 start.go:83] releasing machines lock for "old-k8s-version-968094", held for 24.052354261s
	I0314 19:24:51.181662  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.181979  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:51.185086  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185444  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.185482  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.185683  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186150  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186369  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .DriverName
	I0314 19:24:51.186475  992344 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:24:51.186530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.186600  992344 ssh_runner.go:195] Run: cat /version.json
	I0314 19:24:51.186628  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHHostname
	I0314 19:24:51.189328  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189665  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189739  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.189769  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.189909  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190069  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:51.190091  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:51.190096  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190278  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190372  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHPort
	I0314 19:24:51.190419  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.190530  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHKeyPath
	I0314 19:24:51.190693  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetSSHUsername
	I0314 19:24:51.190870  992344 sshutil.go:53] new ssh client: &{IP:192.168.72.211 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/old-k8s-version-968094/id_rsa Username:docker}
	I0314 19:24:51.273691  992344 ssh_runner.go:195] Run: systemctl --version
	I0314 19:24:51.304581  992344 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:24:51.462596  992344 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:24:51.469505  992344 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:24:51.469580  992344 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:24:51.488042  992344 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:24:51.488064  992344 start.go:494] detecting cgroup driver to use...
	I0314 19:24:51.488127  992344 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:24:51.506331  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:24:51.521263  992344 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:24:51.521310  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:24:51.535346  992344 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:24:51.554784  992344 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:24:51.695072  992344 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:24:51.861752  992344 docker.go:233] disabling docker service ...
	I0314 19:24:51.861822  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:24:51.886279  992344 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:24:51.908899  992344 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:24:52.059911  992344 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:24:52.216861  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:24:52.236554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:24:52.262549  992344 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.2" pause image...
	I0314 19:24:52.262629  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.277311  992344 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:24:52.277405  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.292485  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.307327  992344 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:24:52.323517  992344 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:24:52.337431  992344 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:24:52.350647  992344 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:24:52.350744  992344 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:24:52.371679  992344 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:24:52.384810  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:52.540285  992344 ssh_runner.go:195] Run: sudo systemctl restart crio
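
The block above rewrites /etc/crio/crio.conf.d/02-crio.conf in place (pause image, cgroupfs cgroup manager, conmon_cgroup) and then restarts crio. A hedged sketch of the same edits driven from Go with os/exec; the commands are the ones shown in the log, but running them locally like this is only an illustration, not the ssh_runner implementation:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Commands taken from the log above; they assume a cri-o host with
	// /etc/crio/crio.conf.d/02-crio.conf present and passwordless sudo.
	cmds := []string{
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.2"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf`,
		`sudo systemctl daemon-reload`,
		`sudo systemctl restart crio`,
	}
	for _, c := range cmds {
		if out, err := exec.Command("sh", "-c", c).CombinedOutput(); err != nil {
			fmt.Printf("%s failed: %v\n%s", c, err, out)
			return
		}
	}
	fmt.Println("cri-o reconfigured and restarted")
}
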
	I0314 19:24:52.710717  992344 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:24:52.710812  992344 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:24:52.716025  992344 start.go:562] Will wait 60s for crictl version
	I0314 19:24:52.716079  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:52.720670  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:24:52.760376  992344 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:24:52.760453  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.795912  992344 ssh_runner.go:195] Run: crio --version
	I0314 19:24:52.829365  992344 out.go:177] * Preparing Kubernetes v1.20.0 on CRI-O 1.29.1 ...
	I0314 19:24:48.899626  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:50.899777  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:52.830745  992344 main.go:141] libmachine: (old-k8s-version-968094) Calling .GetIP
	I0314 19:24:52.834322  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.834813  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:45:00:8a", ip: ""} in network mk-old-k8s-version-968094: {Iface:virbr4 ExpiryTime:2024-03-14 20:14:31 +0000 UTC Type:0 Mac:52:54:00:45:00:8a Iaid: IPaddr:192.168.72.211 Prefix:24 Hostname:old-k8s-version-968094 Clientid:01:52:54:00:45:00:8a}
	I0314 19:24:52.834846  992344 main.go:141] libmachine: (old-k8s-version-968094) DBG | domain old-k8s-version-968094 has defined IP address 192.168.72.211 and MAC address 52:54:00:45:00:8a in network mk-old-k8s-version-968094
	I0314 19:24:52.835148  992344 ssh_runner.go:195] Run: grep 192.168.72.1	host.minikube.internal$ /etc/hosts
	I0314 19:24:52.840664  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.72.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:52.855935  992344 kubeadm.go:877] updating cluster {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:24:52.856085  992344 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 19:24:52.856143  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:52.917316  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:52.917384  992344 ssh_runner.go:195] Run: which lz4
	I0314 19:24:52.923732  992344 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:24:52.929018  992344 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:24:52.929045  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (473237281 bytes)
	I0314 19:24:52.555382  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting to get IP...
	I0314 19:24:52.556296  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556767  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.556831  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.556746  993275 retry.go:31] will retry after 250.179074ms: waiting for machine to come up
	I0314 19:24:52.808339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.808989  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:52.809024  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:52.808935  993275 retry.go:31] will retry after 257.317639ms: waiting for machine to come up
	I0314 19:24:53.068134  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068762  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.068810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.068737  993275 retry.go:31] will retry after 427.477171ms: waiting for machine to come up
	I0314 19:24:53.498274  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498751  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.498783  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.498710  993275 retry.go:31] will retry after 414.04038ms: waiting for machine to come up
	I0314 19:24:53.914418  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.914970  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:53.915003  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:53.914922  993275 retry.go:31] will retry after 698.808984ms: waiting for machine to come up
	I0314 19:24:54.616167  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616671  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:54.616733  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:54.616625  993275 retry.go:31] will retry after 627.573493ms: waiting for machine to come up
	I0314 19:24:55.245579  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246152  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:55.246193  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:55.246077  993275 retry.go:31] will retry after 827.444645ms: waiting for machine to come up
	I0314 19:24:56.075132  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075586  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:56.075657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:56.075577  993275 retry.go:31] will retry after 1.317575549s: waiting for machine to come up
	I0314 19:24:53.400660  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:55.906584  992056 pod_ready.go:102] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:56.899301  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:24:56.899342  992056 pod_ready.go:81] duration metric: took 14.508394033s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:56.899353  992056 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	I0314 19:24:55.007168  992344 crio.go:444] duration metric: took 2.08347164s to copy over tarball
	I0314 19:24:55.007258  992344 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:24:58.484792  992344 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.477465904s)
	I0314 19:24:58.484844  992344 crio.go:451] duration metric: took 3.47764437s to extract the tarball
	I0314 19:24:58.484855  992344 ssh_runner.go:146] rm: /preloaded.tar.lz4
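
Because no preloaded images were found on the node, the ~473 MB preload tarball is copied over and unpacked into /var with lz4 and extended attributes preserved, as the tar invocation above shows. A small Go sketch of just the extraction step, wrapping the same command for illustration (the path and sudo access are assumptions of the example):

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	start := time.Now()
	// Same flags as in the log: preserve security.capability xattrs and
	// decompress with lz4 while extracting into /var.
	cmd := exec.Command("sudo", "tar",
		"--xattrs", "--xattrs-include", "security.capability",
		"-I", "lz4", "-C", "/var", "-xf", "/preloaded.tar.lz4")
	if out, err := cmd.CombinedOutput(); err != nil {
		fmt.Printf("extract failed: %v\n%s", err, out)
		return
	}
	fmt.Printf("extracted preload tarball in %s\n", time.Since(start))
}
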
	I0314 19:24:58.531628  992344 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:24:58.586436  992344 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.20.0". assuming images are not preloaded.
	I0314 19:24:58.586467  992344 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.20.0 registry.k8s.io/kube-controller-manager:v1.20.0 registry.k8s.io/kube-scheduler:v1.20.0 registry.k8s.io/kube-proxy:v1.20.0 registry.k8s.io/pause:3.2 registry.k8s.io/etcd:3.4.13-0 registry.k8s.io/coredns:1.7.0 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.586644  992344 image.go:134] retrieving image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.586686  992344 image.go:134] retrieving image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.586732  992344 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.586594  992344 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.586795  992344 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/pause:3.2
	I0314 19:24:58.586598  992344 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588701  992344 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.588708  992344 image.go:177] daemon lookup for registry.k8s.io/pause:3.2: Error response from daemon: No such image: registry.k8s.io/pause:3.2
	I0314 19:24:58.588712  992344 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.588743  992344 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.20.0: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.588700  992344 image.go:177] daemon lookup for registry.k8s.io/coredns:1.7.0: Error response from daemon: No such image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.588773  992344 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:58.588717  992344 image.go:177] daemon lookup for registry.k8s.io/etcd:3.4.13-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:57.395510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.395966  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:57.396012  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:57.395926  993275 retry.go:31] will retry after 1.349742787s: waiting for machine to come up
	I0314 19:24:58.747273  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747764  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:24:58.747790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:24:58.747711  993275 retry.go:31] will retry after 1.715984886s: waiting for machine to come up
	I0314 19:25:00.465630  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466197  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:00.466272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:00.466159  993275 retry.go:31] will retry after 2.291989797s: waiting for machine to come up
	I0314 19:24:58.949160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:01.407335  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:24:58.745061  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:58.748854  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:58.755753  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:58.757595  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:58.776672  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:58.785641  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns:1.7.0
	I0314 19:24:58.803868  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.2
	I0314 19:24:58.878866  992344 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:24:59.049142  992344 cache_images.go:116] "registry.k8s.io/etcd:3.4.13-0" needs transfer: "registry.k8s.io/etcd:3.4.13-0" does not exist at hash "0369cf4303ffdb467dc219990960a9baa8512a54b0ad9283eaf55bd6c0adb934" in container runtime
	I0314 19:24:59.049192  992344 cache_images.go:116] "registry.k8s.io/pause:3.2" needs transfer: "registry.k8s.io/pause:3.2" does not exist at hash "80d28bedfe5dec59da9ebf8e6260224ac9008ab5c11dbbe16ee3ba3e4439ac2c" in container runtime
	I0314 19:24:59.049238  992344 cri.go:218] Removing image: registry.k8s.io/pause:3.2
	I0314 19:24:59.049245  992344 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.20.0" needs transfer: "registry.k8s.io/kube-scheduler:v1.20.0" does not exist at hash "3138b6e3d471224fd516f758f3b53309219bcb6824e07686b3cd60d78012c899" in container runtime
	I0314 19:24:59.049206  992344 cri.go:218] Removing image: registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.049275  992344 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.049297  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.049321  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058360  992344 cache_images.go:116] "registry.k8s.io/coredns:1.7.0" needs transfer: "registry.k8s.io/coredns:1.7.0" does not exist at hash "bfe3a36ebd2528b454be6aebece806db5b40407b833e2af9617bf39afaff8c16" in container runtime
	I0314 19:24:59.058394  992344 cri.go:218] Removing image: registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.058429  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058471  992344 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.20.0" needs transfer: "registry.k8s.io/kube-apiserver:v1.20.0" does not exist at hash "ca9843d3b545457f24b012d6d579ba85f132f2406aa171ad84d53caa55e5de99" in container runtime
	I0314 19:24:59.058508  992344 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.058530  992344 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.20.0" needs transfer: "registry.k8s.io/kube-controller-manager:v1.20.0" does not exist at hash "b9fa1895dcaa6d3dd241d6d9340e939ca30fc0946464ec9f205a8cbe738a8080" in container runtime
	I0314 19:24:59.058550  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058560  992344 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.058580  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.058506  992344 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.20.0" needs transfer: "registry.k8s.io/kube-proxy:v1.20.0" does not exist at hash "10cc881966cfd9287656c2fce1f144625602653d1e8b011487a7a71feb100bdc" in container runtime
	I0314 19:24:59.058620  992344 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.058668  992344 ssh_runner.go:195] Run: which crictl
	I0314 19:24:59.179879  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/pause:3.2
	I0314 19:24:59.179903  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.20.0
	I0314 19:24:59.179964  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.4.13-0
	I0314 19:24:59.180018  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns:1.7.0
	I0314 19:24:59.180048  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.20.0
	I0314 19:24:59.180057  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.20.0
	I0314 19:24:59.180158  992344 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.20.0
	I0314 19:24:59.353654  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0
	I0314 19:24:59.353726  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/pause_3.2
	I0314 19:24:59.353834  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.20.0
	I0314 19:24:59.353886  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.4.13-0
	I0314 19:24:59.353951  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.20.0
	I0314 19:24:59.353992  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.20.0
	I0314 19:24:59.356778  992344 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns_1.7.0
	I0314 19:24:59.356828  992344 cache_images.go:92] duration metric: took 770.342451ms to LoadCachedImages
	W0314 19:24:59.356913  992344 out.go:239] X Unable to load cached images: loading cached images: stat /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.20.0: no such file or directory
	I0314 19:24:59.356940  992344 kubeadm.go:928] updating node { 192.168.72.211 8443 v1.20.0 crio true true} ...
	I0314 19:24:59.357079  992344 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.20.0/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --container-runtime=remote --container-runtime-endpoint=unix:///var/run/crio/crio.sock --hostname-override=old-k8s-version-968094 --kubeconfig=/etc/kubernetes/kubelet.conf --network-plugin=cni --node-ip=192.168.72.211
	
	[Install]
	 config:
	{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:24:59.357158  992344 ssh_runner.go:195] Run: crio config
	I0314 19:24:59.412340  992344 cni.go:84] Creating CNI manager for ""
	I0314 19:24:59.412369  992344 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:24:59.412383  992344 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:24:59.412401  992344 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.72.211 APIServerPort:8443 KubernetesVersion:v1.20.0 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:old-k8s-version-968094 NodeName:old-k8s-version-968094 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.72.211"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.72.211 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:false}
	I0314 19:24:59.412538  992344 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.72.211
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: /var/run/crio/crio.sock
	  name: "old-k8s-version-968094"
	  kubeletExtraArgs:
	    node-ip: 192.168.72.211
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta2
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.72.211"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	dns:
	  type: CoreDNS
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.20.0
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%!"(MISSING)
	  nodefs.inodesFree: "0%!"(MISSING)
	  imagefs.available: "0%!"(MISSING)
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:24:59.412599  992344 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.20.0
	I0314 19:24:59.424508  992344 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:24:59.424568  992344 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:24:59.435744  992344 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (430 bytes)
	I0314 19:24:59.456291  992344 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:24:59.476542  992344 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2123 bytes)
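
The kubeadm.yaml printed a few lines above is generated from the cluster config (node IP, cluster name, Kubernetes version, CRI socket) and then copied to /var/tmp/minikube/kubeadm.yaml.new. A reduced sketch of that kind of generation with text/template; the template covers only a fragment of the real file and the struct fields are invented for the example:

package main

import (
	"os"
	"text/template"
)

// nodeConfig holds just the values the fragment below needs; the real
// generator carries far more settings.
type nodeConfig struct {
	NodeIP            string
	NodeName          string
	KubernetesVersion string
	CRISocket         string
}

const fragment = `apiVersion: kubeadm.k8s.io/v1beta2
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: {{.CRISocket}}
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta2
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
`

func main() {
	cfg := nodeConfig{
		NodeIP:            "192.168.72.211",
		NodeName:          "old-k8s-version-968094",
		KubernetesVersion: "v1.20.0",
		CRISocket:         "/var/run/crio/crio.sock",
	}
	tmpl := template.Must(template.New("kubeadm").Parse(fragment))
	// Render to stdout instead of scp'ing the bytes to the node.
	if err := tmpl.Execute(os.Stdout, cfg); err != nil {
		panic(err)
	}
}
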
	I0314 19:24:59.496114  992344 ssh_runner.go:195] Run: grep 192.168.72.211	control-plane.minikube.internal$ /etc/hosts
	I0314 19:24:59.500824  992344 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.72.211	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:24:59.515178  992344 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:24:59.658035  992344 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:24:59.677735  992344 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094 for IP: 192.168.72.211
	I0314 19:24:59.677764  992344 certs.go:194] generating shared ca certs ...
	I0314 19:24:59.677788  992344 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:24:59.677986  992344 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:24:59.678055  992344 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:24:59.678073  992344 certs.go:256] generating profile certs ...
	I0314 19:24:59.678209  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.key
	I0314 19:24:59.678288  992344 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key.8692dcff
	I0314 19:24:59.678358  992344 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key
	I0314 19:24:59.678538  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:24:59.678589  992344 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:24:59.678602  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:24:59.678684  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:24:59.678751  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:24:59.678787  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:24:59.678858  992344 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:24:59.679859  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:24:59.720965  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:24:59.758643  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:24:59.791205  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:24:59.832034  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1432 bytes)
	I0314 19:24:59.864634  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:24:59.912167  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:24:59.941168  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1675 bytes)
	I0314 19:24:59.969896  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:24:59.998999  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:00.029688  992344 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:00.062406  992344 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:00.083876  992344 ssh_runner.go:195] Run: openssl version
	I0314 19:25:00.091083  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:00.104196  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110057  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.110152  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:00.117863  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:00.130915  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:00.144184  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149849  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.149905  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:00.156267  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:00.168884  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:00.181228  992344 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186741  992344 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.186815  992344 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:00.193408  992344 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
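
Each extra certificate above is copied to /usr/share/ca-certificates, its OpenSSL subject hash is computed with `openssl x509 -hash -noout`, and a `<hash>.0` symlink is created under /etc/ssl/certs (for example 3ec20f2e.0). A sketch of that symlinking step; it shells out to openssl exactly as the log does and assumes write access to /etc/ssl/certs:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
	"strings"
)

// linkCert computes the OpenSSL subject hash of certPath and points
// /etc/ssl/certs/<hash>.0 at it, mirroring the test -L / ln -fs step above.
func linkCert(certPath string) error {
	out, err := exec.Command("openssl", "x509", "-hash", "-noout", "-in", certPath).Output()
	if err != nil {
		return fmt.Errorf("hashing %s: %w", certPath, err)
	}
	hash := strings.TrimSpace(string(out))
	link := filepath.Join("/etc/ssl/certs", hash+".0")
	_ = os.Remove(link) // equivalent of ln -fs: replace any stale link
	return os.Symlink(certPath, link)
}

func main() {
	if err := linkCert("/usr/share/ca-certificates/9513112.pem"); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
}
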
	I0314 19:25:00.206565  992344 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:00.211955  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:00.218803  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:00.226004  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:00.233071  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:00.239998  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:00.246935  992344 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:00.253650  992344 kubeadm.go:391] StartCluster: {Name:old-k8s-version-968094 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:true NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:old-k8s-version-968094 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.72.211 Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[dashboard:true] CustomAddonImages:map[MetricsScraper:registry.k8s.io/echoserver:1.4 MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:00.253770  992344 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:00.253810  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.296620  992344 cri.go:89] found id: ""
	I0314 19:25:00.296698  992344 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:00.308438  992344 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:00.308468  992344 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:00.308474  992344 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:00.308525  992344 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:00.319200  992344 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:00.320258  992344 kubeconfig.go:47] verify endpoint returned: get endpoint: "old-k8s-version-968094" does not appear in /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:25:00.320949  992344 kubeconfig.go:62] /home/jenkins/minikube-integration/18384-942544/kubeconfig needs updating (will repair): [kubeconfig missing "old-k8s-version-968094" cluster setting kubeconfig missing "old-k8s-version-968094" context setting]
	I0314 19:25:00.321954  992344 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:00.323826  992344 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:00.334959  992344 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.72.211
	I0314 19:25:00.334999  992344 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:00.335015  992344 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:00.335094  992344 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:00.382418  992344 cri.go:89] found id: ""
	I0314 19:25:00.382504  992344 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:00.400714  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:00.411916  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:00.411941  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:00.412000  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:00.421737  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:00.421786  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:00.431760  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:00.441154  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:00.441196  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:00.450820  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.460234  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:00.460286  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:00.470870  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:00.480352  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:00.480410  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:00.490282  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:00.500774  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:00.627719  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.640607  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.012840431s)
	I0314 19:25:01.640641  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:01.916817  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.028420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:02.119081  992344 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:02.119190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.619675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.119328  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:03.620344  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:02.761090  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761657  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:02.761739  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:02.761611  993275 retry.go:31] will retry after 3.350017146s: waiting for machine to come up
	I0314 19:25:06.113637  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114139  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:06.114178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:06.114067  993275 retry.go:31] will retry after 2.99017798s: waiting for machine to come up
	I0314 19:25:03.407892  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:05.907001  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:04.120088  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:04.619514  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.119530  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:05.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.119991  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:06.619382  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.119301  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:07.620072  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.119582  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:08.619828  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.105563  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106118  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | unable to find current IP address of domain default-k8s-diff-port-440341 in network mk-default-k8s-diff-port-440341
	I0314 19:25:09.106171  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | I0314 19:25:09.105987  993275 retry.go:31] will retry after 5.42931998s: waiting for machine to come up
	I0314 19:25:08.406736  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:10.906160  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:09.119659  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:09.619483  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.119624  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:10.619745  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.120056  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:11.619647  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.120231  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:12.619400  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.120340  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:13.620046  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.061441  991880 start.go:364] duration metric: took 1m4.63836278s to acquireMachinesLock for "no-preload-731976"
	I0314 19:25:16.061504  991880 start.go:96] Skipping create...Using existing machine configuration
	I0314 19:25:16.061513  991880 fix.go:54] fixHost starting: 
	I0314 19:25:16.061978  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:25:16.062021  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:25:16.079752  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39401
	I0314 19:25:16.080283  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:25:16.080930  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:25:16.080964  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:25:16.081279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:25:16.081477  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:16.081630  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:25:16.083170  991880 fix.go:112] recreateIfNeeded on no-preload-731976: state=Stopped err=<nil>
	I0314 19:25:16.083196  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	W0314 19:25:16.083368  991880 fix.go:138] unexpected machine state, will restart: <nil>
	I0314 19:25:16.085486  991880 out.go:177] * Restarting existing kvm2 VM for "no-preload-731976" ...
	I0314 19:25:14.539116  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539618  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has current primary IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.539639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Found IP for machine: 192.168.61.88
	I0314 19:25:14.539650  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserving static IP address...
	I0314 19:25:14.540057  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.540081  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Reserved static IP address: 192.168.61.88
	I0314 19:25:14.540105  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | skip adding static IP to network mk-default-k8s-diff-port-440341 - found existing host DHCP lease matching {name: "default-k8s-diff-port-440341", mac: "52:54:00:39:02:6d", ip: "192.168.61.88"}
	I0314 19:25:14.540126  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Getting to WaitForSSH function...
	I0314 19:25:14.540172  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Waiting for SSH to be available...
	I0314 19:25:14.542249  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542558  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.542594  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.542722  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH client type: external
	I0314 19:25:14.542755  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa (-rw-------)
	I0314 19:25:14.542793  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.61.88 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:14.542810  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | About to run SSH command:
	I0314 19:25:14.542841  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | exit 0
	I0314 19:25:14.668392  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:14.668820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetConfigRaw
	I0314 19:25:14.669583  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:14.672181  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672581  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.672622  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.672861  992563 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/config.json ...
	I0314 19:25:14.673049  992563 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:14.673069  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:14.673317  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.675826  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676173  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.676204  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.676383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.676547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676702  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.676820  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.676969  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.677212  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.677229  992563 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:14.780979  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:14.781005  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781243  992563 buildroot.go:166] provisioning hostname "default-k8s-diff-port-440341"
	I0314 19:25:14.781272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:14.781508  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.784454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.784868  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.784897  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.785044  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.785241  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785410  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.785545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.785731  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.786010  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.786038  992563 main.go:141] libmachine: About to run SSH command:
	sudo hostname default-k8s-diff-port-440341 && echo "default-k8s-diff-port-440341" | sudo tee /etc/hostname
	I0314 19:25:14.904629  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: default-k8s-diff-port-440341
	
	I0314 19:25:14.904677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:14.907677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908043  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:14.908065  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:14.908308  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:14.908510  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908709  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:14.908895  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:14.909075  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:14.909242  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:14.909260  992563 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sdefault-k8s-diff-port-440341' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 default-k8s-diff-port-440341/g' /etc/hosts;
				else 
					echo '127.0.1.1 default-k8s-diff-port-440341' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:15.027592  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:15.027627  992563 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:15.027663  992563 buildroot.go:174] setting up certificates
	I0314 19:25:15.027676  992563 provision.go:84] configureAuth start
	I0314 19:25:15.027686  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetMachineName
	I0314 19:25:15.027992  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:15.031259  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031691  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.031723  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.031839  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.034341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034690  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.034727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.034882  992563 provision.go:143] copyHostCerts
	I0314 19:25:15.034957  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:15.034974  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:15.035032  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:15.035117  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:15.035126  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:15.035150  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:15.035219  992563 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:15.035240  992563 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:15.035276  992563 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:15.035368  992563 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.default-k8s-diff-port-440341 san=[127.0.0.1 192.168.61.88 default-k8s-diff-port-440341 localhost minikube]
	I0314 19:25:15.366505  992563 provision.go:177] copyRemoteCerts
	I0314 19:25:15.366572  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:15.366601  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.369547  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.369931  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.369968  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.370178  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.370389  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.370559  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.370668  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.451879  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:15.479025  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1249 bytes)
	I0314 19:25:15.505498  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:15.531616  992563 provision.go:87] duration metric: took 503.926667ms to configureAuth
	I0314 19:25:15.531643  992563 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:15.531808  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:25:15.531887  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.534449  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534774  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.534805  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.534957  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.535182  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535344  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.535479  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.535660  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.535863  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.535895  992563 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %!s(MISSING) "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:15.820304  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:15.820329  992563 machine.go:97] duration metric: took 1.147267075s to provisionDockerMachine
	I0314 19:25:15.820361  992563 start.go:293] postStartSetup for "default-k8s-diff-port-440341" (driver="kvm2")
	I0314 19:25:15.820373  992563 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:15.820407  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:15.820799  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:15.820845  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.823575  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.823941  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.823987  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.824114  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.824357  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.824550  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.824671  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:15.908341  992563 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:15.913846  992563 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:15.913876  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:15.913955  992563 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:15.914034  992563 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:15.914122  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:15.925105  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:15.954205  992563 start.go:296] duration metric: took 133.827027ms for postStartSetup
	I0314 19:25:15.954258  992563 fix.go:56] duration metric: took 24.772420326s for fixHost
	I0314 19:25:15.954282  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:15.957262  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957609  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:15.957635  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:15.957844  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:15.958095  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958272  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:15.958454  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:15.958685  992563 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:15.958877  992563 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.61.88 22 <nil> <nil>}
	I0314 19:25:15.958890  992563 main.go:141] libmachine: About to run SSH command:
	date +%!s(MISSING).%!N(MISSING)
	I0314 19:25:16.061284  992563 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444316.007193080
	
	I0314 19:25:16.061311  992563 fix.go:216] guest clock: 1710444316.007193080
	I0314 19:25:16.061318  992563 fix.go:229] Guest: 2024-03-14 19:25:16.00719308 +0000 UTC Remote: 2024-03-14 19:25:15.954262263 +0000 UTC m=+249.360732976 (delta=52.930817ms)
	I0314 19:25:16.061337  992563 fix.go:200] guest clock delta is within tolerance: 52.930817ms
	I0314 19:25:16.061342  992563 start.go:83] releasing machines lock for "default-k8s-diff-port-440341", held for 24.879556185s
	I0314 19:25:16.061371  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.061696  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:16.064827  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065187  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.065222  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.065419  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.065929  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066138  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:25:16.066251  992563 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:16.066313  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.066422  992563 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:16.066451  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:25:16.069082  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069202  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069488  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069518  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069624  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:16.069659  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.069727  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:16.069881  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.069946  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:25:16.070091  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070106  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:25:16.070265  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.070283  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:25:16.070420  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:25:16.149620  992563 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:16.178081  992563 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:16.329236  992563 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:16.337073  992563 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:16.337165  992563 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%!p(MISSING), " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:16.364829  992563 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:16.364860  992563 start.go:494] detecting cgroup driver to use...
	I0314 19:25:16.364950  992563 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:16.381277  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:16.396677  992563 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:16.396790  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:16.415438  992563 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:16.434001  992563 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:16.557750  992563 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:16.705623  992563 docker.go:233] disabling docker service ...
	I0314 19:25:16.705722  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:16.724795  992563 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:16.740336  992563 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:16.886850  992563 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:17.053349  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:17.069592  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %!s(MISSING) "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:17.094552  992563 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:17.094625  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.110947  992563 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:17.111007  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.126320  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.146601  992563 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:17.159826  992563 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:17.173155  992563 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:17.184494  992563 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:17.184558  992563 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:17.208695  992563 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:17.227381  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:17.368355  992563 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:17.520886  992563 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:17.520974  992563 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:17.526580  992563 start.go:562] Will wait 60s for crictl version
	I0314 19:25:17.526628  992563 ssh_runner.go:195] Run: which crictl
	I0314 19:25:17.531219  992563 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:17.575983  992563 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:17.576094  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.609997  992563 ssh_runner.go:195] Run: crio --version
	I0314 19:25:17.649005  992563 out.go:177] * Preparing Kubernetes v1.28.4 on CRI-O 1.29.1 ...
	I0314 19:25:13.406397  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:15.407636  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:17.409791  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:14.119937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:14.619997  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.120018  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:15.620272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.119409  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.619421  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.120049  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:17.619392  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.120272  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:18.619832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:16.086761  991880 main.go:141] libmachine: (no-preload-731976) Calling .Start
	I0314 19:25:16.086939  991880 main.go:141] libmachine: (no-preload-731976) Ensuring networks are active...
	I0314 19:25:16.087657  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network default is active
	I0314 19:25:16.088038  991880 main.go:141] libmachine: (no-preload-731976) Ensuring network mk-no-preload-731976 is active
	I0314 19:25:16.088466  991880 main.go:141] libmachine: (no-preload-731976) Getting domain xml...
	I0314 19:25:16.089244  991880 main.go:141] libmachine: (no-preload-731976) Creating domain...
	I0314 19:25:17.372280  991880 main.go:141] libmachine: (no-preload-731976) Waiting to get IP...
	I0314 19:25:17.373197  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.373612  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.373682  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.373595  993471 retry.go:31] will retry after 247.546207ms: waiting for machine to come up
	I0314 19:25:17.622973  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.623491  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.623521  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.623426  993471 retry.go:31] will retry after 340.11253ms: waiting for machine to come up
	I0314 19:25:17.964912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:17.965367  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:17.965409  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:17.965326  993471 retry.go:31] will retry after 467.934923ms: waiting for machine to come up
	I0314 19:25:18.434872  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.435488  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.435532  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.435428  993471 retry.go:31] will retry after 407.906998ms: waiting for machine to come up
	I0314 19:25:18.845093  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:18.845593  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:18.845624  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:18.845538  993471 retry.go:31] will retry after 461.594471ms: waiting for machine to come up
	I0314 19:25:17.650252  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetIP
	I0314 19:25:17.653280  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653677  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:25:17.653706  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:25:17.653907  992563 ssh_runner.go:195] Run: grep 192.168.61.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:17.660311  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.61.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:17.676122  992563 kubeadm.go:877] updating cluster {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:17.676277  992563 preload.go:132] Checking if preload exists for k8s version v1.28.4 and runtime crio
	I0314 19:25:17.676348  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:17.718920  992563 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.28.4". assuming images are not preloaded.
	I0314 19:25:17.718999  992563 ssh_runner.go:195] Run: which lz4
	I0314 19:25:17.724064  992563 ssh_runner.go:195] Run: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4
	I0314 19:25:17.729236  992563 ssh_runner.go:352] existence check for /preloaded.tar.lz4: stat -c "%!s(MISSING) %!y(MISSING)" /preloaded.tar.lz4: Process exited with status 1
	stdout:
	
	stderr:
	stat: cannot statx '/preloaded.tar.lz4': No such file or directory
	I0314 19:25:17.729268  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.28.4-cri-o-overlay-amd64.tar.lz4 --> /preloaded.tar.lz4 (458073571 bytes)
	I0314 19:25:19.779405  992563 crio.go:444] duration metric: took 2.055391829s to copy over tarball
	I0314 19:25:19.779494  992563 ssh_runner.go:195] Run: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4
	I0314 19:25:19.411247  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:21.911525  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:19.120147  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.619419  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.119333  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:20.620029  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.119402  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:21.620236  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.119692  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:22.619383  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.120125  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:23.620104  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:19.309335  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.309861  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.309892  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.309812  993471 retry.go:31] will retry after 629.96532ms: waiting for machine to come up
	I0314 19:25:19.941554  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:19.942052  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:19.942086  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:19.942018  993471 retry.go:31] will retry after 1.025753706s: waiting for machine to come up
	I0314 19:25:20.969178  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:20.969734  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:20.969775  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:20.969671  993471 retry.go:31] will retry after 1.02702661s: waiting for machine to come up
	I0314 19:25:21.998485  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:21.999019  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:21.999054  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:21.998955  993471 retry.go:31] will retry after 1.463514327s: waiting for machine to come up
	I0314 19:25:23.464556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:23.465087  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:23.465123  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:23.465035  993471 retry.go:31] will retry after 2.155372334s: waiting for machine to come up
	I0314 19:25:22.861284  992563 ssh_runner.go:235] Completed: sudo tar --xattrs --xattrs-include security.capability -I lz4 -C /var -xf /preloaded.tar.lz4: (3.081750952s)
	I0314 19:25:22.861324  992563 crio.go:451] duration metric: took 3.081885026s to extract the tarball
	I0314 19:25:22.861335  992563 ssh_runner.go:146] rm: /preloaded.tar.lz4
	I0314 19:25:22.907763  992563 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:22.962568  992563 crio.go:496] all images are preloaded for cri-o runtime.
	I0314 19:25:22.962593  992563 cache_images.go:84] Images are preloaded, skipping loading
	I0314 19:25:22.962602  992563 kubeadm.go:928] updating node { 192.168.61.88 8444 v1.28.4 crio true true} ...
	I0314 19:25:22.962756  992563 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.28.4/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=default-k8s-diff-port-440341 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.61.88
	
	[Install]
	 config:
	{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:22.962851  992563 ssh_runner.go:195] Run: crio config
	I0314 19:25:23.020057  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:23.020092  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:23.020109  992563 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:23.020150  992563 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.61.88 APIServerPort:8444 KubernetesVersion:v1.28.4 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:default-k8s-diff-port-440341 NodeName:default-k8s-diff-port-440341 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.61.88"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.61.88 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticPodPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:23.020354  992563 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.61.88
	  bindPort: 8444
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "default-k8s-diff-port-440341"
	  kubeletExtraArgs:
	    node-ip: 192.168.61.88
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.61.88"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8444
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.28.4
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
	
	I0314 19:25:23.020441  992563 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.28.4
	I0314 19:25:23.031259  992563 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:23.031351  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:23.041703  992563 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (327 bytes)
	I0314 19:25:23.061055  992563 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (352 bytes)
	I0314 19:25:23.084905  992563 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2169 bytes)
	I0314 19:25:23.108282  992563 ssh_runner.go:195] Run: grep 192.168.61.88	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:23.114097  992563 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.61.88	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:23.134147  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:23.261318  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:23.280454  992563 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341 for IP: 192.168.61.88
	I0314 19:25:23.280483  992563 certs.go:194] generating shared ca certs ...
	I0314 19:25:23.280506  992563 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:23.280675  992563 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:23.280739  992563 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:23.280753  992563 certs.go:256] generating profile certs ...
	I0314 19:25:23.280872  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.key
	I0314 19:25:23.280971  992563 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key.a3c32cf7
	I0314 19:25:23.281038  992563 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key
	I0314 19:25:23.281177  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:23.281219  992563 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:23.281232  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:23.281268  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:23.281300  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:23.281333  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:23.281389  992563 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:23.282304  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:23.351284  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:23.402835  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:23.435934  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:23.467188  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1440 bytes)
	I0314 19:25:23.499760  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:23.528544  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:23.556740  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:23.584404  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:23.615693  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:23.643349  992563 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:23.671793  992563 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:23.692766  992563 ssh_runner.go:195] Run: openssl version
	I0314 19:25:23.699459  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:23.711735  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717022  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.717078  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:23.723658  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:23.735141  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:23.746833  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753783  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.753855  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:23.760817  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:23.772826  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:23.784241  992563 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789107  992563 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.789170  992563 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:23.795406  992563 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:23.806969  992563 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:23.811875  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:23.818337  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:23.826885  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:23.835278  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:23.843419  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:23.851515  992563 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:23.860074  992563 kubeadm.go:391] StartCluster: {Name:default-k8s-diff-port-440341 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8444 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:default-k8s-diff-port-440341 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:23.860169  992563 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:23.860241  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.902985  992563 cri.go:89] found id: ""
	I0314 19:25:23.903065  992563 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:23.915686  992563 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:23.915711  992563 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:23.915718  992563 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:23.915776  992563 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:23.926246  992563 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:23.927336  992563 kubeconfig.go:125] found "default-k8s-diff-port-440341" server: "https://192.168.61.88:8444"
	I0314 19:25:23.929693  992563 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:23.940022  992563 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.61.88
	I0314 19:25:23.940053  992563 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:23.940067  992563 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:23.940135  992563 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:23.982828  992563 cri.go:89] found id: ""
	I0314 19:25:23.982911  992563 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:24.001146  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:24.014973  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:24.015016  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:24.015069  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:25:24.024883  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:24.024954  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:24.034932  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:25:24.044680  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:24.044737  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:24.054865  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.064375  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:24.064440  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:24.075503  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:25:24.085139  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:24.085181  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:24.096092  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:24.106907  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.238605  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:24.990111  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.246192  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.325019  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:25.458340  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:25.458512  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.959178  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.459441  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.543678  992563 api_server.go:72] duration metric: took 1.085336822s to wait for apiserver process to appear ...
	I0314 19:25:26.543708  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:26.543734  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:26.544332  992563 api_server.go:269] stopped: https://192.168.61.88:8444/healthz: Get "https://192.168.61.88:8444/healthz": dial tcp 192.168.61.88:8444: connect: connection refused
	I0314 19:25:24.407953  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:26.408497  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:24.119417  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:24.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.120173  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.619362  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.119366  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:26.619644  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.119516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:27.619418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.120115  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:28.619593  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:25.621639  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:25.622189  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:25.622224  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:25.622129  993471 retry.go:31] will retry after 2.47317901s: waiting for machine to come up
	I0314 19:25:28.097250  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:28.097610  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:28.097640  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:28.097554  993471 retry.go:31] will retry after 2.923437953s: waiting for machine to come up
	I0314 19:25:27.044437  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.729256  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.729296  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:29.729321  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:29.752124  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:25:29.752162  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:25:30.044560  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.049804  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.049846  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:30.544454  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:30.558197  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:30.558237  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.043868  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.050468  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:25:31.050497  992563 api_server.go:103] status: https://192.168.61.88:8444/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[+]poststarthook/scheduling/bootstrap-system-priority-classes ok
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-deprecated-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:25:31.544657  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:25:31.549640  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:25:31.561049  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:25:31.561080  992563 api_server.go:131] duration metric: took 5.017362991s to wait for apiserver health ...
	I0314 19:25:31.561091  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:25:31.561101  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:31.563012  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:31.564434  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:25:31.594766  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:25:31.618252  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:25:31.632693  992563 system_pods.go:59] 8 kube-system pods found
	I0314 19:25:31.632743  992563 system_pods.go:61] "coredns-5dd5756b68-bkfks" [c4bc8ea9-9a0f-43df-9916-a9a7e42fc4e6] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:25:31.632752  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [936bfbcb-333a-45db-9cd1-b152c14bc623] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:25:31.632758  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [0533b8e8-e66a-4f38-8d55-c813446a4406] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:25:31.632768  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [b31998b9-b575-430b-918e-b9c4a7c626d9] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:25:31.632773  992563 system_pods.go:61] "kube-proxy-249fd" [f8bafea7-bc78-4e48-ad55-3b913c3e2fd1] Running
	I0314 19:25:31.632778  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [99c2fc5a-61a5-4813-9042-dac771932708] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:25:31.632786  992563 system_pods.go:61] "metrics-server-57f55c9bc5-t2hhv" [03b6608b-bea1-4605-b85d-c09f2c744118] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:25:31.632801  992563 system_pods.go:61] "storage-provisioner" [ec6d3122-f9c6-4f14-bc66-7cab18b88fa5] Running
	I0314 19:25:31.632811  992563 system_pods.go:74] duration metric: took 14.536847ms to wait for pod list to return data ...
	I0314 19:25:31.632818  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:25:31.636580  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:25:31.636606  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:25:31.636618  992563 node_conditions.go:105] duration metric: took 3.793367ms to run NodePressure ...
	I0314 19:25:31.636635  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:28.907100  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:30.908031  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:29.119861  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:29.620287  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.120113  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:30.619452  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.120315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.619667  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.120221  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:32.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.120292  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:33.619449  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:31.022404  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:31.022914  991880 main.go:141] libmachine: (no-preload-731976) DBG | unable to find current IP address of domain no-preload-731976 in network mk-no-preload-731976
	I0314 19:25:31.022950  991880 main.go:141] libmachine: (no-preload-731976) DBG | I0314 19:25:31.022850  993471 retry.go:31] will retry after 4.138449888s: waiting for machine to come up
	I0314 19:25:31.874889  992563 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879729  992563 kubeadm.go:733] kubelet initialised
	I0314 19:25:31.879757  992563 kubeadm.go:734] duration metric: took 4.834353ms waiting for restarted kubelet to initialise ...
	I0314 19:25:31.879768  992563 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:25:31.884949  992563 pod_ready.go:78] waiting up to 4m0s for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.890443  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890467  992563 pod_ready.go:81] duration metric: took 5.495766ms for pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.890475  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "coredns-5dd5756b68-bkfks" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.890485  992563 pod_ready.go:78] waiting up to 4m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.895241  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895275  992563 pod_ready.go:81] duration metric: took 4.778217ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.895289  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.895300  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:31.900184  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900204  992563 pod_ready.go:81] duration metric: took 4.895049ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:31.900222  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:31.900228  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.023193  992563 pod_ready.go:97] node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023224  992563 pod_ready.go:81] duration metric: took 122.987086ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	E0314 19:25:32.023236  992563 pod_ready.go:66] WaitExtra: waitPodCondition: node "default-k8s-diff-port-440341" hosting pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace is currently not "Ready" (skipping!): node "default-k8s-diff-port-440341" has status "Ready":"False"
	I0314 19:25:32.023242  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423939  992563 pod_ready.go:92] pod "kube-proxy-249fd" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:32.423972  992563 pod_ready.go:81] duration metric: took 400.720648ms for pod "kube-proxy-249fd" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:32.423988  992563 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:34.431140  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:36.432871  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:33.408652  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.906834  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:37.914444  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:35.165792  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166342  991880 main.go:141] libmachine: (no-preload-731976) Found IP for machine: 192.168.39.148
	I0314 19:25:35.166372  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has current primary IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.166382  991880 main.go:141] libmachine: (no-preload-731976) Reserving static IP address...
	I0314 19:25:35.166707  991880 main.go:141] libmachine: (no-preload-731976) Reserved static IP address: 192.168.39.148
	I0314 19:25:35.166727  991880 main.go:141] libmachine: (no-preload-731976) Waiting for SSH to be available...
	I0314 19:25:35.166748  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.166781  991880 main.go:141] libmachine: (no-preload-731976) DBG | skip adding static IP to network mk-no-preload-731976 - found existing host DHCP lease matching {name: "no-preload-731976", mac: "52:54:00:57:0e:67", ip: "192.168.39.148"}
	I0314 19:25:35.166800  991880 main.go:141] libmachine: (no-preload-731976) DBG | Getting to WaitForSSH function...
	I0314 19:25:35.169377  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169760  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.169795  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.169926  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH client type: external
	I0314 19:25:35.169960  991880 main.go:141] libmachine: (no-preload-731976) DBG | Using SSH private key: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa (-rw-------)
	I0314 19:25:35.169998  991880 main.go:141] libmachine: (no-preload-731976) DBG | &{[-F /dev/null -o ConnectionAttempts=3 -o ConnectTimeout=10 -o ControlMaster=no -o ControlPath=none -o LogLevel=quiet -o PasswordAuthentication=no -o ServerAliveInterval=60 -o StrictHostKeyChecking=no -o UserKnownHostsFile=/dev/null docker@192.168.39.148 -o IdentitiesOnly=yes -i /home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa -p 22] /usr/bin/ssh <nil>}
	I0314 19:25:35.170022  991880 main.go:141] libmachine: (no-preload-731976) DBG | About to run SSH command:
	I0314 19:25:35.170036  991880 main.go:141] libmachine: (no-preload-731976) DBG | exit 0
	I0314 19:25:35.296417  991880 main.go:141] libmachine: (no-preload-731976) DBG | SSH cmd err, output: <nil>: 
	I0314 19:25:35.296801  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetConfigRaw
	I0314 19:25:35.297596  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.300253  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300720  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.300757  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.300996  991880 profile.go:142] Saving config to /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/config.json ...
	I0314 19:25:35.301205  991880 machine.go:94] provisionDockerMachine start ...
	I0314 19:25:35.301229  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:35.301493  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.304165  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304600  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.304643  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.304850  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.305119  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305292  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.305468  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.305627  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.305863  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.305881  991880 main.go:141] libmachine: About to run SSH command:
	hostname
	I0314 19:25:35.421933  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: minikube
	
	I0314 19:25:35.421969  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422269  991880 buildroot.go:166] provisioning hostname "no-preload-731976"
	I0314 19:25:35.422303  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.422516  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.425530  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426039  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.426069  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.426265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.426476  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426646  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.426807  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.426997  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.427179  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.427200  991880 main.go:141] libmachine: About to run SSH command:
	sudo hostname no-preload-731976 && echo "no-preload-731976" | sudo tee /etc/hostname
	I0314 19:25:35.558170  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: no-preload-731976
	
	I0314 19:25:35.558216  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.561575  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562028  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.562059  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.562372  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.562673  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.562874  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.563059  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.563234  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:35.563468  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:35.563495  991880 main.go:141] libmachine: About to run SSH command:
	
			if ! grep -xq '.*\sno-preload-731976' /etc/hosts; then
				if grep -xq '127.0.1.1\s.*' /etc/hosts; then
					sudo sed -i 's/^127.0.1.1\s.*/127.0.1.1 no-preload-731976/g' /etc/hosts;
				else 
					echo '127.0.1.1 no-preload-731976' | sudo tee -a /etc/hosts; 
				fi
			fi
	I0314 19:25:35.691282  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	I0314 19:25:35.691321  991880 buildroot.go:172] set auth options {CertDir:/home/jenkins/minikube-integration/18384-942544/.minikube CaCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem CaPrivateKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem CaCertRemotePath:/etc/docker/ca.pem ServerCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ServerKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem ClientKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem ServerCertRemotePath:/etc/docker/server.pem ServerKeyRemotePath:/etc/docker/server-key.pem ClientCertPath:/home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem ServerCertSANs:[] StorePath:/home/jenkins/minikube-integration/18384-942544/.minikube}
	I0314 19:25:35.691412  991880 buildroot.go:174] setting up certificates
	I0314 19:25:35.691437  991880 provision.go:84] configureAuth start
	I0314 19:25:35.691454  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetMachineName
	I0314 19:25:35.691821  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:35.694807  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695223  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.695255  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.695385  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.698118  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698519  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.698548  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.698752  991880 provision.go:143] copyHostCerts
	I0314 19:25:35.698834  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem, removing ...
	I0314 19:25:35.698872  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem
	I0314 19:25:35.698922  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/ca.pem (1082 bytes)
	I0314 19:25:35.699019  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem, removing ...
	I0314 19:25:35.699030  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem
	I0314 19:25:35.699051  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/cert.pem (1123 bytes)
	I0314 19:25:35.699114  991880 exec_runner.go:144] found /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem, removing ...
	I0314 19:25:35.699156  991880 exec_runner.go:203] rm: /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem
	I0314 19:25:35.699177  991880 exec_runner.go:151] cp: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem --> /home/jenkins/minikube-integration/18384-942544/.minikube/key.pem (1675 bytes)
	I0314 19:25:35.699240  991880 provision.go:117] generating server cert: /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem ca-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem private-key=/home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem org=jenkins.no-preload-731976 san=[127.0.0.1 192.168.39.148 localhost minikube no-preload-731976]
	I0314 19:25:35.915177  991880 provision.go:177] copyRemoteCerts
	I0314 19:25:35.915240  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/docker /etc/docker /etc/docker
	I0314 19:25:35.915265  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:35.918112  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918468  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:35.918499  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:35.918607  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:35.918813  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:35.918989  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:35.919161  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.003712  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server.pem --> /etc/docker/server.pem (1220 bytes)
	I0314 19:25:36.037023  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/machines/server-key.pem --> /etc/docker/server-key.pem (1679 bytes)
	I0314 19:25:36.068063  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem --> /etc/docker/ca.pem (1082 bytes)
	I0314 19:25:36.101448  991880 provision.go:87] duration metric: took 409.997228ms to configureAuth
	I0314 19:25:36.101475  991880 buildroot.go:189] setting minikube options for container-runtime
	I0314 19:25:36.101691  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:25:36.101783  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.104700  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.105138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.105310  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.105536  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105733  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.105885  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.106088  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.106325  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.106345  991880 main.go:141] libmachine: About to run SSH command:
	sudo mkdir -p /etc/sysconfig && printf %s "
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	" | sudo tee /etc/sysconfig/crio.minikube && sudo systemctl restart crio
	I0314 19:25:36.387809  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 
	CRIO_MINIKUBE_OPTIONS='--insecure-registry 10.96.0.0/12 '
	
	I0314 19:25:36.387841  991880 machine.go:97] duration metric: took 1.086620225s to provisionDockerMachine
	I0314 19:25:36.387855  991880 start.go:293] postStartSetup for "no-preload-731976" (driver="kvm2")
	I0314 19:25:36.387869  991880 start.go:322] creating required directories: [/etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs]
	I0314 19:25:36.387886  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.388286  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/kubernetes/addons /etc/kubernetes/manifests /var/tmp/minikube /var/lib/minikube /var/lib/minikube/certs /var/lib/minikube/images /var/lib/minikube/binaries /tmp/gvisor /usr/share/ca-certificates /etc/ssl/certs
	I0314 19:25:36.388316  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.391292  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391742  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.391774  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.391959  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.392203  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.392450  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.392637  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.477050  991880 ssh_runner.go:195] Run: cat /etc/os-release
	I0314 19:25:36.482184  991880 info.go:137] Remote host: Buildroot 2023.02.9
	I0314 19:25:36.482205  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/addons for local assets ...
	I0314 19:25:36.482270  991880 filesync.go:126] Scanning /home/jenkins/minikube-integration/18384-942544/.minikube/files for local assets ...
	I0314 19:25:36.482372  991880 filesync.go:149] local asset: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem -> 9513112.pem in /etc/ssl/certs
	I0314 19:25:36.482459  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/ssl/certs
	I0314 19:25:36.492716  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:36.520655  991880 start.go:296] duration metric: took 132.783495ms for postStartSetup
	I0314 19:25:36.520723  991880 fix.go:56] duration metric: took 20.459188473s for fixHost
	I0314 19:25:36.520761  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.523718  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524107  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.524138  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.524431  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.524648  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.524842  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.525031  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.525211  991880 main.go:141] libmachine: Using SSH client type: native
	I0314 19:25:36.525425  991880 main.go:141] libmachine: &{{{<nil> 0 [] [] []} docker [0x82d4a0] 0x830200 <nil>  [] 0s} 192.168.39.148 22 <nil> <nil>}
	I0314 19:25:36.525436  991880 main.go:141] libmachine: About to run SSH command:
	date +%s.%N
	I0314 19:25:36.633356  991880 main.go:141] libmachine: SSH cmd err, output: <nil>: 1710444336.610892497
	
	I0314 19:25:36.633389  991880 fix.go:216] guest clock: 1710444336.610892497
	I0314 19:25:36.633400  991880 fix.go:229] Guest: 2024-03-14 19:25:36.610892497 +0000 UTC Remote: 2024-03-14 19:25:36.520738659 +0000 UTC m=+367.687364006 (delta=90.153838ms)
	I0314 19:25:36.633445  991880 fix.go:200] guest clock delta is within tolerance: 90.153838ms
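The fix.go lines above read the guest clock over SSH with `date +%s.%N`, compare it against the host clock, and accept the drift because it stays within tolerance. The following is a minimal, illustrative Go sketch of that check, not minikube's actual fix.go: the parsing helper and the one-second tolerance are assumptions for illustration only.

// Illustrative sketch of the guest-clock tolerance check (assumed helper and threshold).
package main

import (
	"fmt"
	"strconv"
	"strings"
	"time"
)

// parseGuestClock converts "seconds.nanoseconds" output such as
// "1710444336.610892497" into a time.Time.
func parseGuestClock(out string) (time.Time, error) {
	parts := strings.SplitN(strings.TrimSpace(out), ".", 2)
	sec, err := strconv.ParseInt(parts[0], 10, 64)
	if err != nil {
		return time.Time{}, err
	}
	var nsec int64
	if len(parts) == 2 {
		frac := parts[1]
		if len(frac) > 9 {
			frac = frac[:9]
		}
		// Right-pad so ".61" means 610000000ns rather than 61ns.
		frac += strings.Repeat("0", 9-len(frac))
		if nsec, err = strconv.ParseInt(frac, 10, 64); err != nil {
			return time.Time{}, err
		}
	}
	return time.Unix(sec, nsec).UTC(), nil
}

func main() {
	const tolerance = time.Second // illustrative threshold, not minikube's real value
	guest, err := parseGuestClock("1710444336.610892497") // guest clock from the log
	if err != nil {
		panic(err)
	}
	host := time.Date(2024, 3, 14, 19, 25, 36, 520738659, time.UTC) // host time from the log
	delta := guest.Sub(host)
	if delta < 0 {
		delta = -delta
	}
	fmt.Printf("guest clock delta %v within tolerance: %v\n", delta, delta <= tolerance)
}

Run against the values captured in the log, this prints a delta of roughly 90.15ms, matching the "delta is within tolerance" line above.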
	I0314 19:25:36.633457  991880 start.go:83] releasing machines lock for "no-preload-731976", held for 20.57197992s
	I0314 19:25:36.633490  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.633778  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:36.636556  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.636959  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.636991  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.637190  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637708  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637871  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:25:36.637934  991880 ssh_runner.go:195] Run: curl -sS -m 2 https://registry.k8s.io/
	I0314 19:25:36.638009  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.638075  991880 ssh_runner.go:195] Run: cat /version.json
	I0314 19:25:36.638104  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:25:36.640821  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641207  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641236  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641272  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641489  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.641668  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.641789  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:36.641863  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:36.641961  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.642002  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:25:36.642188  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:25:36.642394  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:25:36.642606  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:25:36.753962  991880 ssh_runner.go:195] Run: systemctl --version
	I0314 19:25:36.761020  991880 ssh_runner.go:195] Run: sudo sh -c "podman version >/dev/null"
	I0314 19:25:36.916046  991880 ssh_runner.go:195] Run: sh -c "stat /etc/cni/net.d/*loopback.conf*"
	W0314 19:25:36.923607  991880 cni.go:209] loopback cni configuration skipped: "/etc/cni/net.d/*loopback.conf*" not found
	I0314 19:25:36.923688  991880 ssh_runner.go:195] Run: sudo find /etc/cni/net.d -maxdepth 1 -type f ( ( -name *bridge* -or -name *podman* ) -and -not -name *.mk_disabled ) -printf "%p, " -exec sh -c "sudo mv {} {}.mk_disabled" ;
	I0314 19:25:36.941685  991880 cni.go:262] disabled [/etc/cni/net.d/87-podman-bridge.conflist] bridge cni config(s)
	I0314 19:25:36.941710  991880 start.go:494] detecting cgroup driver to use...
	I0314 19:25:36.941776  991880 ssh_runner.go:195] Run: sudo systemctl stop -f containerd
	I0314 19:25:36.962019  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service containerd
	I0314 19:25:36.977917  991880 docker.go:217] disabling cri-docker service (if available) ...
	I0314 19:25:36.977982  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.socket
	I0314 19:25:36.995378  991880 ssh_runner.go:195] Run: sudo systemctl stop -f cri-docker.service
	I0314 19:25:37.010859  991880 ssh_runner.go:195] Run: sudo systemctl disable cri-docker.socket
	I0314 19:25:37.145828  991880 ssh_runner.go:195] Run: sudo systemctl mask cri-docker.service
	I0314 19:25:37.310805  991880 docker.go:233] disabling docker service ...
	I0314 19:25:37.310893  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.socket
	I0314 19:25:37.327346  991880 ssh_runner.go:195] Run: sudo systemctl stop -f docker.service
	I0314 19:25:37.342143  991880 ssh_runner.go:195] Run: sudo systemctl disable docker.socket
	I0314 19:25:37.485925  991880 ssh_runner.go:195] Run: sudo systemctl mask docker.service
	I0314 19:25:37.607814  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service docker
	I0314 19:25:37.623068  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo mkdir -p /etc && printf %s "runtime-endpoint: unix:///var/run/crio/crio.sock
	" | sudo tee /etc/crictl.yaml"
	I0314 19:25:37.644387  991880 crio.go:59] configure cri-o to use "registry.k8s.io/pause:3.9" pause image...
	I0314 19:25:37.644455  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.655919  991880 crio.go:70] configuring cri-o to use "cgroupfs" as cgroup driver...
	I0314 19:25:37.655992  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.669290  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/conmon_cgroup = .*/d' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.681601  991880 ssh_runner.go:195] Run: sh -c "sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' /etc/crio/crio.conf.d/02-crio.conf"
	I0314 19:25:37.694022  991880 ssh_runner.go:195] Run: sh -c "sudo rm -rf /etc/cni/net.mk"
	I0314 19:25:37.705793  991880 ssh_runner.go:195] Run: sudo sysctl net.bridge.bridge-nf-call-iptables
	I0314 19:25:37.716260  991880 crio.go:148] couldn't verify netfilter by "sudo sysctl net.bridge.bridge-nf-call-iptables" which might be okay. error: sudo sysctl net.bridge.bridge-nf-call-iptables: Process exited with status 255
	stdout:
	
	stderr:
	sysctl: cannot stat /proc/sys/net/bridge/bridge-nf-call-iptables: No such file or directory
	I0314 19:25:37.716307  991880 ssh_runner.go:195] Run: sudo modprobe br_netfilter
	I0314 19:25:37.732112  991880 ssh_runner.go:195] Run: sudo sh -c "echo 1 > /proc/sys/net/ipv4/ip_forward"
	I0314 19:25:37.749555  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:37.868548  991880 ssh_runner.go:195] Run: sudo systemctl restart crio
	I0314 19:25:38.023735  991880 start.go:541] Will wait 60s for socket path /var/run/crio/crio.sock
	I0314 19:25:38.023821  991880 ssh_runner.go:195] Run: stat /var/run/crio/crio.sock
	I0314 19:25:38.029414  991880 start.go:562] Will wait 60s for crictl version
	I0314 19:25:38.029481  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.033985  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl version
	I0314 19:25:38.077012  991880 start.go:578] Version:  0.1.0
	RuntimeName:  cri-o
	RuntimeVersion:  1.29.1
	RuntimeApiVersion:  v1
	I0314 19:25:38.077102  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.109155  991880 ssh_runner.go:195] Run: crio --version
	I0314 19:25:38.146003  991880 out.go:177] * Preparing Kubernetes v1.29.0-rc.2 on CRI-O 1.29.1 ...
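The ssh_runner lines at 19:25:37 above show CRI-O being pointed at the pause:3.9 image and the cgroupfs driver by editing /etc/crio/crio.conf.d/02-crio.conf with sed before crio is restarted. Below is a small Go sketch that replays those shell steps through os/exec; the dry-run helper is an illustrative stand-in, not minikube's ssh_runner API.

// Illustrative replay of the CRI-O configuration steps from the log (dry-run by default).
package main

import (
	"fmt"
	"os/exec"
)

// run executes one shell step; with dryRun it only prints what would be executed.
func run(cmd string, dryRun bool) error {
	fmt.Println("run:", cmd)
	if dryRun {
		return nil
	}
	return exec.Command("sh", "-c", cmd).Run()
}

func main() {
	conf := "/etc/crio/crio.conf.d/02-crio.conf"
	steps := []string{
		// Same edits as the sed commands in the log above.
		`sudo sed -i 's|^.*pause_image = .*$|pause_image = "registry.k8s.io/pause:3.9"|' ` + conf,
		`sudo sed -i 's|^.*cgroup_manager = .*$|cgroup_manager = "cgroupfs"|' ` + conf,
		`sudo sed -i '/conmon_cgroup = .*/d' ` + conf,
		`sudo sed -i '/cgroup_manager = .*/a conmon_cgroup = "pod"' ` + conf,
		`sudo systemctl restart crio`,
	}
	for _, s := range steps {
		if err := run(s, true); err != nil { // dryRun=true: print only
			fmt.Println("step failed:", err)
			return
		}
	}
}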
	I0314 19:25:34.119724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:34.620261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.119543  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:35.620151  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.119893  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:36.619442  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.119326  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:37.619427  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.119766  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.619711  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:38.147344  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetIP
	I0314 19:25:38.149841  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150180  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:25:38.150217  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:25:38.150608  991880 ssh_runner.go:195] Run: grep 192.168.39.1	host.minikube.internal$ /etc/hosts
	I0314 19:25:38.155598  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\thost.minikube.internal$' "/etc/hosts"; echo "192.168.39.1	host.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:38.172048  991880 kubeadm.go:877] updating cluster {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1
.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m
0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s} ...
	I0314 19:25:38.172187  991880 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 19:25:38.172260  991880 ssh_runner.go:195] Run: sudo crictl images --output json
	I0314 19:25:38.220190  991880 crio.go:492] couldn't find preloaded image for "registry.k8s.io/kube-apiserver:v1.29.0-rc.2". assuming images are not preloaded.
	I0314 19:25:38.220232  991880 cache_images.go:88] LoadCachedImages start: [registry.k8s.io/kube-apiserver:v1.29.0-rc.2 registry.k8s.io/kube-controller-manager:v1.29.0-rc.2 registry.k8s.io/kube-scheduler:v1.29.0-rc.2 registry.k8s.io/kube-proxy:v1.29.0-rc.2 registry.k8s.io/pause:3.9 registry.k8s.io/etcd:3.5.10-0 registry.k8s.io/coredns/coredns:v1.11.1 gcr.io/k8s-minikube/storage-provisioner:v5]
	I0314 19:25:38.220291  991880 image.go:134] retrieving image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.220313  991880 image.go:134] retrieving image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.220345  991880 image.go:134] retrieving image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.220378  991880 image.go:134] retrieving image: registry.k8s.io/pause:3.9
	I0314 19:25:38.220395  991880 image.go:134] retrieving image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.220486  991880 image.go:134] retrieving image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.220484  991880 image.go:134] retrieving image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.220724  991880 image.go:134] retrieving image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.221960  991880 image.go:177] daemon lookup for registry.k8s.io/pause:3.9: Error response from daemon: No such image: registry.k8s.io/pause:3.9
	I0314 19:25:38.222035  991880 image.go:177] daemon lookup for gcr.io/k8s-minikube/storage-provisioner:v5: Error response from daemon: No such image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.222177  991880 image.go:177] daemon lookup for registry.k8s.io/kube-apiserver:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.222230  991880 image.go:177] daemon lookup for registry.k8s.io/kube-proxy:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/coredns/coredns:v1.11.1: Error response from daemon: No such image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.222271  991880 image.go:177] daemon lookup for registry.k8s.io/kube-controller-manager:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.222272  991880 image.go:177] daemon lookup for registry.k8s.io/kube-scheduler:v1.29.0-rc.2: Error response from daemon: No such image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.222210  991880 image.go:177] daemon lookup for registry.k8s.io/etcd:3.5.10-0: Error response from daemon: No such image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.372514  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/pause:3.9
	I0314 19:25:38.384051  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.388330  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.395017  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.397902  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.409638  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.431681  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.501339  991880 ssh_runner.go:195] Run: sudo podman image inspect --format {{.Id}} gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.590670  991880 cache_images.go:116] "registry.k8s.io/coredns/coredns:v1.11.1" needs transfer: "registry.k8s.io/coredns/coredns:v1.11.1" does not exist at hash "cbb01a7bd410dc08ba382018ab909a674fb0e48687f0c00797ed5bc34fcc6bb4" in container runtime
	I0314 19:25:38.590775  991880 cache_images.go:116] "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-scheduler:v1.29.0-rc.2" does not exist at hash "4270645ed6b7a4160357898afaff490096bc6032724fb0bf786bf0077bd37210" in container runtime
	I0314 19:25:38.590838  991880 cache_images.go:116] "registry.k8s.io/etcd:3.5.10-0" needs transfer: "registry.k8s.io/etcd:3.5.10-0" does not exist at hash "a0eed15eed4498c145ef2f1883fcd300d7adbb759df73c901abd5383dda668e7" in container runtime
	I0314 19:25:38.590853  991880 cri.go:218] Removing image: registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.590860  991880 cri.go:218] Removing image: registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590906  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.590797  991880 cri.go:218] Removing image: registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.591036  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627618  991880 cache_images.go:116] "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-controller-manager:v1.29.0-rc.2" does not exist at hash "d4e01cdf639708bfec87fe34854ad206f444e1d58d34defcb56feedbf1d57d3d" in container runtime
	I0314 19:25:38.627667  991880 cri.go:218] Removing image: registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.627716  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.627732  991880 cache_images.go:116] "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-apiserver:v1.29.0-rc.2" does not exist at hash "bbb47a0f83324722f97533f4e7ed308c71fea14e14b2461a2091e1366b402a2f" in container runtime
	I0314 19:25:38.627769  991880 cri.go:218] Removing image: registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.627826  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648107  991880 cache_images.go:116] "registry.k8s.io/kube-proxy:v1.29.0-rc.2" needs transfer: "registry.k8s.io/kube-proxy:v1.29.0-rc.2" does not exist at hash "cc0a4f00aad7b5c96d0761b71161ecfa36338d1e4203c038c0edfbc38ce7b834" in container runtime
	I0314 19:25:38.648128  991880 cache_images.go:116] "gcr.io/k8s-minikube/storage-provisioner:v5" needs transfer: "gcr.io/k8s-minikube/storage-provisioner:v5" does not exist at hash "6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562" in container runtime
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.648152  991880 cri.go:218] Removing image: gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648197  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:25:38.648279  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/etcd:3.5.10-0
	I0314 19:25:38.648277  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-scheduler:v1.29.0-rc.2
	I0314 19:25:38.648335  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/coredns/coredns:v1.11.1
	I0314 19:25:38.648346  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-controller-manager:v1.29.0-rc.2
	I0314 19:25:38.648374  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-apiserver:v1.29.0-rc.2
	I0314 19:25:38.783957  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0
	I0314 19:25:38.784024  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:25:38.784071  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.784097  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.784197  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:38.788609  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788695  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:38.788719  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1
	I0314 19:25:38.788782  991880 ssh_runner.go:195] Run: sudo /usr/bin/crictl rmi registry.k8s.io/kube-proxy:v1.29.0-rc.2
	I0314 19:25:38.788797  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:38.788856  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.788931  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:38.849452  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5
	I0314 19:25:38.849488  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/etcd_3.5.10-0 (exists)
	I0314 19:25:38.849505  991880 crio.go:257] Loading image: /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849554  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0
	I0314 19:25:38.849563  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:38.849617  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/coredns_v1.11.1 (exists)
	I0314 19:25:38.849624  991880 cache_images.go:286] Loading image from: /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.849552  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849645  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849672  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2 (exists)
	I0314 19:25:38.849739  991880 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:38.854753  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/storage-provisioner_v5 (exists)
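The interleaved 991880 lines here show the no-preload image path: each cached tarball under /var/lib/minikube/images is stat'ed so an already-present copy can be skipped, and is then loaded into CRI-O's storage with `sudo podman load -i`. A rough Go sketch of that skip-or-load decision follows; it only prints the podman command instead of executing it, and the file list is taken from the log for illustration.

// Illustrative skip-or-load flow for cached image tarballs (prints instead of executing podman).
package main

import (
	"fmt"
	"os"
	"os/exec"
	"path/filepath"
)

// loadCachedImage loosely mirrors the stat / skip-copy / podman-load sequence in the
// log: if the tarball already exists on the guest, skip the transfer and load it.
func loadCachedImage(tarball string) error {
	if _, err := os.Stat(tarball); err != nil {
		return fmt.Errorf("tarball not present, a transfer would be needed first: %w", err)
	}
	fmt.Println("copy: skipping", filepath.Base(tarball), "(exists)")
	cmd := exec.Command("sudo", "podman", "load", "-i", tarball)
	fmt.Println("would run:", cmd.String()) // print instead of cmd.Run() in this sketch
	return nil
}

func main() {
	for _, img := range []string{
		"/var/lib/minikube/images/etcd_3.5.10-0",
		"/var/lib/minikube/images/coredns_v1.11.1",
	} {
		if err := loadCachedImage(img); err != nil {
			fmt.Println(err)
		}
	}
}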
	I0314 19:25:38.933788  992563 pod_ready.go:102] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.435999  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:25:39.436021  992563 pod_ready.go:81] duration metric: took 7.012025071s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:39.436031  992563 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	I0314 19:25:41.445517  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:40.407508  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:42.907630  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:39.120157  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:39.620116  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.119693  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:40.620198  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.120192  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:41.619323  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.119637  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.619724  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:43.619799  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:42.952723  991880 ssh_runner.go:235] Completed: stat -c "%s %y" /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (4.102947708s)
	I0314 19:25:42.952761  991880 ssh_runner.go:356] copy: skipping /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2 (exists)
	I0314 19:25:42.952762  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/etcd_3.5.10-0: (4.103172862s)
	I0314 19:25:42.952791  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/etcd_3.5.10-0 from cache
	I0314 19:25:42.952821  991880 crio.go:257] Loading image: /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:42.952878  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1
	I0314 19:25:43.943582  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.945997  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:45.407780  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:44.119609  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:44.619260  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.119599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.619665  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.120008  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:46.619297  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.119435  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:47.619512  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.119521  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:48.619320  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:45.022375  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/coredns_v1.11.1: (2.069465444s)
	I0314 19:25:45.022413  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/coredns/coredns_v1.11.1 from cache
	I0314 19:25:45.022458  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:45.022539  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2
	I0314 19:25:48.091412  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-apiserver_v1.29.0-rc.2: (3.068839048s)
	I0314 19:25:48.091449  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-apiserver_v1.29.0-rc.2 from cache
	I0314 19:25:48.091482  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.091536  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2
	I0314 19:25:48.451322  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.944057  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:47.957506  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:50.408381  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:52.906494  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:49.120283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.619796  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.120279  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:50.619408  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.120076  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:51.619516  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.119566  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:52.620268  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.120329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:53.619847  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:49.657504  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-scheduler_v1.29.0-rc.2: (1.565934426s)
	I0314 19:25:49.657542  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-scheduler_v1.29.0-rc.2 from cache
	I0314 19:25:49.657578  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:49.657646  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2
	I0314 19:25:52.134720  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-controller-manager_v1.29.0-rc.2: (2.477038469s)
	I0314 19:25:52.134760  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-controller-manager_v1.29.0-rc.2 from cache
	I0314 19:25:52.134794  991880 crio.go:257] Loading image: /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:52.134888  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/storage-provisioner_v5
	I0314 19:25:53.095193  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/gcr.io/k8s-minikube/storage-provisioner_v5 from cache
	I0314 19:25:53.095258  991880 crio.go:257] Loading image: /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.095337  991880 ssh_runner.go:195] Run: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2
	I0314 19:25:53.447791  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:55.944276  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.907556  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:57.406376  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:54.119981  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.620180  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.119616  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:55.619375  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.119240  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:56.619922  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.120288  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:57.620190  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.119329  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.620315  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:54.949310  991880 ssh_runner.go:235] Completed: sudo podman load -i /var/lib/minikube/images/kube-proxy_v1.29.0-rc.2: (1.853945012s)
	I0314 19:25:54.949346  991880 cache_images.go:315] Transferred and loaded /home/jenkins/minikube-integration/18384-942544/.minikube/cache/images/amd64/registry.k8s.io/kube-proxy_v1.29.0-rc.2 from cache
	I0314 19:25:54.949374  991880 cache_images.go:123] Successfully loaded all cached images
	I0314 19:25:54.949385  991880 cache_images.go:92] duration metric: took 16.729134981s to LoadCachedImages
	I0314 19:25:54.949398  991880 kubeadm.go:928] updating node { 192.168.39.148 8443 v1.29.0-rc.2 crio true true} ...
	I0314 19:25:54.949542  991880 kubeadm.go:940] kubelet [Unit]
	Wants=crio.service
	
	[Service]
	ExecStart=
	ExecStart=/var/lib/minikube/binaries/v1.29.0-rc.2/kubelet --bootstrap-kubeconfig=/etc/kubernetes/bootstrap-kubelet.conf --config=/var/lib/kubelet/config.yaml --hostname-override=no-preload-731976 --kubeconfig=/etc/kubernetes/kubelet.conf --node-ip=192.168.39.148
	
	[Install]
	 config:
	{KubernetesVersion:v1.29.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:}
	I0314 19:25:54.949667  991880 ssh_runner.go:195] Run: crio config
	I0314 19:25:55.001838  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:25:55.001869  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:25:55.001885  991880 kubeadm.go:84] Using pod CIDR: 10.244.0.0/16
	I0314 19:25:55.001916  991880 kubeadm.go:181] kubeadm options: {CertDir:/var/lib/minikube/certs ServiceCIDR:10.96.0.0/12 PodSubnet:10.244.0.0/16 AdvertiseAddress:192.168.39.148 APIServerPort:8443 KubernetesVersion:v1.29.0-rc.2 EtcdDataDir:/var/lib/minikube/etcd EtcdExtraArgs:map[] ClusterName:no-preload-731976 NodeName:no-preload-731976 DNSDomain:cluster.local CRISocket:/var/run/crio/crio.sock ImageRepository: ComponentOptions:[{Component:apiServer ExtraArgs:map[enable-admission-plugins:NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota] Pairs:map[certSANs:["127.0.0.1", "localhost", "192.168.39.148"]]} {Component:controllerManager ExtraArgs:map[allocate-node-cidrs:true leader-elect:false] Pairs:map[]} {Component:scheduler ExtraArgs:map[leader-elect:false] Pairs:map[]}] FeatureArgs:map[] NodeIP:192.168.39.148 CgroupDriver:cgroupfs ClientCAFile:/var/lib/minikube/certs/ca.crt StaticP
odPath:/etc/kubernetes/manifests ControlPlaneAddress:control-plane.minikube.internal KubeProxyOptions:map[] ResolvConfSearchRegression:false KubeletConfigOpts:map[containerRuntimeEndpoint:unix:///var/run/crio/crio.sock hairpinMode:hairpin-veth runtimeRequestTimeout:15m] PrependCriSocketUnix:true}
	I0314 19:25:55.002121  991880 kubeadm.go:187] kubeadm config:
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: InitConfiguration
	localAPIEndpoint:
	  advertiseAddress: 192.168.39.148
	  bindPort: 8443
	bootstrapTokens:
	  - groups:
	      - system:bootstrappers:kubeadm:default-node-token
	    ttl: 24h0m0s
	    usages:
	      - signing
	      - authentication
	nodeRegistration:
	  criSocket: unix:///var/run/crio/crio.sock
	  name: "no-preload-731976"
	  kubeletExtraArgs:
	    node-ip: 192.168.39.148
	  taints: []
	---
	apiVersion: kubeadm.k8s.io/v1beta3
	kind: ClusterConfiguration
	apiServer:
	  certSANs: ["127.0.0.1", "localhost", "192.168.39.148"]
	  extraArgs:
	    enable-admission-plugins: "NamespaceLifecycle,LimitRanger,ServiceAccount,DefaultStorageClass,DefaultTolerationSeconds,NodeRestriction,MutatingAdmissionWebhook,ValidatingAdmissionWebhook,ResourceQuota"
	controllerManager:
	  extraArgs:
	    allocate-node-cidrs: "true"
	    leader-elect: "false"
	scheduler:
	  extraArgs:
	    leader-elect: "false"
	certificatesDir: /var/lib/minikube/certs
	clusterName: mk
	controlPlaneEndpoint: control-plane.minikube.internal:8443
	etcd:
	  local:
	    dataDir: /var/lib/minikube/etcd
	    extraArgs:
	      proxy-refresh-interval: "70000"
	kubernetesVersion: v1.29.0-rc.2
	networking:
	  dnsDomain: cluster.local
	  podSubnet: "10.244.0.0/16"
	  serviceSubnet: 10.96.0.0/12
	---
	apiVersion: kubelet.config.k8s.io/v1beta1
	kind: KubeletConfiguration
	authentication:
	  x509:
	    clientCAFile: /var/lib/minikube/certs/ca.crt
	cgroupDriver: cgroupfs
	containerRuntimeEndpoint: unix:///var/run/crio/crio.sock
	hairpinMode: hairpin-veth
	runtimeRequestTimeout: 15m
	clusterDomain: "cluster.local"
	# disable disk resource management by default
	imageGCHighThresholdPercent: 100
	evictionHard:
	  nodefs.available: "0%"
	  nodefs.inodesFree: "0%"
	  imagefs.available: "0%"
	failSwapOn: false
	staticPodPath: /etc/kubernetes/manifests
	---
	apiVersion: kubeproxy.config.k8s.io/v1alpha1
	kind: KubeProxyConfiguration
	clusterCIDR: "10.244.0.0/16"
	metricsBindAddress: 0.0.0.0:10249
	conntrack:
	  maxPerCore: 0
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_established"
	  tcpEstablishedTimeout: 0s
	# Skip setting "net.netfilter.nf_conntrack_tcp_timeout_close"
	  tcpCloseWaitTimeout: 0s
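The kubeadm/kubelet/kube-proxy manifest above is rendered per profile, with the node name, node IP and Kubernetes version filled in for no-preload-731976. The snippet below is a heavily reduced text/template sketch of that substitution step; the template text and field names are illustrative only and are not minikube's real bootstrapper template.

// Reduced, illustrative templating of the per-profile values seen in the config above.
package main

import (
	"os"
	"text/template"
)

// Only the per-profile values from this log (node name, node IP, Kubernetes version)
// are templated; everything else in the real config is omitted here.
const kubeadmTmpl = `apiVersion: kubeadm.k8s.io/v1beta3
kind: InitConfiguration
localAPIEndpoint:
  advertiseAddress: {{.NodeIP}}
  bindPort: 8443
nodeRegistration:
  criSocket: unix:///var/run/crio/crio.sock
  name: "{{.NodeName}}"
  kubeletExtraArgs:
    node-ip: {{.NodeIP}}
---
apiVersion: kubeadm.k8s.io/v1beta3
kind: ClusterConfiguration
kubernetesVersion: {{.KubernetesVersion}}
controlPlaneEndpoint: control-plane.minikube.internal:8443
`

func main() {
	data := struct {
		NodeName, NodeIP, KubernetesVersion string
	}{"no-preload-731976", "192.168.39.148", "v1.29.0-rc.2"}
	t := template.Must(template.New("kubeadm").Parse(kubeadmTmpl))
	if err := t.Execute(os.Stdout, data); err != nil {
		panic(err)
	}
}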
	
	I0314 19:25:55.002212  991880 ssh_runner.go:195] Run: sudo ls /var/lib/minikube/binaries/v1.29.0-rc.2
	I0314 19:25:55.014769  991880 binaries.go:44] Found k8s binaries, skipping transfer
	I0314 19:25:55.014842  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/systemd/system/kubelet.service.d /lib/systemd/system /var/tmp/minikube
	I0314 19:25:55.026082  991880 ssh_runner.go:362] scp memory --> /etc/systemd/system/kubelet.service.d/10-kubeadm.conf (322 bytes)
	I0314 19:25:55.049071  991880 ssh_runner.go:362] scp memory --> /lib/systemd/system/kubelet.service (357 bytes)
	I0314 19:25:55.071131  991880 ssh_runner.go:362] scp memory --> /var/tmp/minikube/kubeadm.yaml.new (2166 bytes)
	I0314 19:25:55.093566  991880 ssh_runner.go:195] Run: grep 192.168.39.148	control-plane.minikube.internal$ /etc/hosts
	I0314 19:25:55.098332  991880 ssh_runner.go:195] Run: /bin/bash -c "{ grep -v $'\tcontrol-plane.minikube.internal$' "/etc/hosts"; echo "192.168.39.148	control-plane.minikube.internal"; } > /tmp/h.$$; sudo cp /tmp/h.$$ "/etc/hosts""
	I0314 19:25:55.113424  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:25:55.260159  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:25:55.283145  991880 certs.go:68] Setting up /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976 for IP: 192.168.39.148
	I0314 19:25:55.283174  991880 certs.go:194] generating shared ca certs ...
	I0314 19:25:55.283197  991880 certs.go:226] acquiring lock for ca certs: {Name:mk519b55811360e7e353529ea1812eea6fe7a085 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:25:55.283377  991880 certs.go:235] skipping valid "minikubeCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key
	I0314 19:25:55.283441  991880 certs.go:235] skipping valid "proxyClientCA" ca cert: /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key
	I0314 19:25:55.283455  991880 certs.go:256] generating profile certs ...
	I0314 19:25:55.283564  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube-user": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.key
	I0314 19:25:55.283661  991880 certs.go:359] skipping valid signed profile cert regeneration for "minikube": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key.5587cb42
	I0314 19:25:55.283720  991880 certs.go:359] skipping valid signed profile cert regeneration for "aggregator": /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key
	I0314 19:25:55.283895  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem (1338 bytes)
	W0314 19:25:55.283948  991880 certs.go:480] ignoring /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311_empty.pem, impossibly tiny 0 bytes
	I0314 19:25:55.283962  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca-key.pem (1679 bytes)
	I0314 19:25:55.283993  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/ca.pem (1082 bytes)
	I0314 19:25:55.284031  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/cert.pem (1123 bytes)
	I0314 19:25:55.284066  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/certs/key.pem (1675 bytes)
	I0314 19:25:55.284121  991880 certs.go:484] found cert: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem (1708 bytes)
	I0314 19:25:55.284976  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /var/lib/minikube/certs/ca.crt (1111 bytes)
	I0314 19:25:55.326779  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.key --> /var/lib/minikube/certs/ca.key (1675 bytes)
	I0314 19:25:55.376167  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.crt --> /var/lib/minikube/certs/proxy-client-ca.crt (1119 bytes)
	I0314 19:25:55.405828  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/proxy-client-ca.key --> /var/lib/minikube/certs/proxy-client-ca.key (1679 bytes)
	I0314 19:25:55.458807  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.crt --> /var/lib/minikube/certs/apiserver.crt (1424 bytes)
	I0314 19:25:55.494051  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/apiserver.key --> /var/lib/minikube/certs/apiserver.key (1679 bytes)
	I0314 19:25:55.531015  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.crt --> /var/lib/minikube/certs/proxy-client.crt (1147 bytes)
	I0314 19:25:55.559184  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/proxy-client.key --> /var/lib/minikube/certs/proxy-client.key (1679 bytes)
	I0314 19:25:55.588905  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/certs/951311.pem --> /usr/share/ca-certificates/951311.pem (1338 bytes)
	I0314 19:25:55.616661  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/ssl/certs/9513112.pem --> /usr/share/ca-certificates/9513112.pem (1708 bytes)
	I0314 19:25:55.646728  991880 ssh_runner.go:362] scp /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt --> /usr/share/ca-certificates/minikubeCA.pem (1111 bytes)
	I0314 19:25:55.673995  991880 ssh_runner.go:362] scp memory --> /var/lib/minikube/kubeconfig (738 bytes)
	I0314 19:25:55.692276  991880 ssh_runner.go:195] Run: openssl version
	I0314 19:25:55.698918  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/minikubeCA.pem && ln -fs /usr/share/ca-certificates/minikubeCA.pem /etc/ssl/certs/minikubeCA.pem"
	I0314 19:25:55.711703  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717107  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1111 Mar 14 18:05 /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.717177  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/minikubeCA.pem
	I0314 19:25:55.723435  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/b5213941.0 || ln -fs /etc/ssl/certs/minikubeCA.pem /etc/ssl/certs/b5213941.0"
	I0314 19:25:55.736575  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/951311.pem && ln -fs /usr/share/ca-certificates/951311.pem /etc/ssl/certs/951311.pem"
	I0314 19:25:55.749982  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755614  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1338 Mar 14 18:14 /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.755680  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/951311.pem
	I0314 19:25:55.762122  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/51391683.0 || ln -fs /etc/ssl/certs/951311.pem /etc/ssl/certs/51391683.0"
	I0314 19:25:55.774447  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -s /usr/share/ca-certificates/9513112.pem && ln -fs /usr/share/ca-certificates/9513112.pem /etc/ssl/certs/9513112.pem"
	I0314 19:25:55.786787  991880 ssh_runner.go:195] Run: ls -la /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791855  991880 certs.go:528] hashing: -rw-r--r-- 1 root root 1708 Mar 14 18:14 /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.791901  991880 ssh_runner.go:195] Run: openssl x509 -hash -noout -in /usr/share/ca-certificates/9513112.pem
	I0314 19:25:55.798041  991880 ssh_runner.go:195] Run: sudo /bin/bash -c "test -L /etc/ssl/certs/3ec20f2e.0 || ln -fs /etc/ssl/certs/9513112.pem /etc/ssl/certs/3ec20f2e.0"
	I0314 19:25:55.810324  991880 ssh_runner.go:195] Run: stat /var/lib/minikube/certs/apiserver-kubelet-client.crt
	I0314 19:25:55.815698  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-etcd-client.crt -checkend 86400
	I0314 19:25:55.822389  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/apiserver-kubelet-client.crt -checkend 86400
	I0314 19:25:55.829046  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/server.crt -checkend 86400
	I0314 19:25:55.835660  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/healthcheck-client.crt -checkend 86400
	I0314 19:25:55.843075  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/etcd/peer.crt -checkend 86400
	I0314 19:25:55.849353  991880 ssh_runner.go:195] Run: openssl x509 -noout -in /var/lib/minikube/certs/front-proxy-client.crt -checkend 86400
	I0314 19:25:55.855678  991880 kubeadm.go:391] StartCluster: {Name:no-preload-731976 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:2200 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29
.0-rc.2 ClusterName:no-preload-731976 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true metrics-server:true storage-provisioner:true] CustomAddonImages:map[MetricsServer:registry.k8s.io/echoserver:1.4] CustomAddonRegistries:map[MetricsServer:fake.domain] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s
Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 19:25:55.855799  991880 cri.go:54] listing CRI containers in root : {State:paused Name: Namespaces:[kube-system]}
	I0314 19:25:55.855834  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.906341  991880 cri.go:89] found id: ""
	I0314 19:25:55.906408  991880 ssh_runner.go:195] Run: sudo ls /var/lib/kubelet/kubeadm-flags.env /var/lib/kubelet/config.yaml /var/lib/minikube/etcd
	W0314 19:25:55.918790  991880 kubeadm.go:404] apiserver tunnel failed: apiserver port not set
	I0314 19:25:55.918819  991880 kubeadm.go:407] found existing configuration files, will attempt cluster restart
	I0314 19:25:55.918826  991880 kubeadm.go:587] restartPrimaryControlPlane start ...
	I0314 19:25:55.918875  991880 ssh_runner.go:195] Run: sudo test -d /data/minikube
	I0314 19:25:55.929988  991880 kubeadm.go:129] /data/minikube skipping compat symlinks: sudo test -d /data/minikube: Process exited with status 1
	stdout:
	
	stderr:
	I0314 19:25:55.931422  991880 kubeconfig.go:125] found "no-preload-731976" server: "https://192.168.39.148:8443"
	I0314 19:25:55.933865  991880 ssh_runner.go:195] Run: sudo diff -u /var/tmp/minikube/kubeadm.yaml /var/tmp/minikube/kubeadm.yaml.new
	I0314 19:25:55.946711  991880 kubeadm.go:624] The running cluster does not require reconfiguration: 192.168.39.148
	I0314 19:25:55.946743  991880 kubeadm.go:1153] stopping kube-system containers ...
	I0314 19:25:55.946757  991880 cri.go:54] listing CRI containers in root : {State:all Name: Namespaces:[kube-system]}
	I0314 19:25:55.946812  991880 ssh_runner.go:195] Run: sudo -s eval "crictl ps -a --quiet --label io.kubernetes.pod.namespace=kube-system"
	I0314 19:25:55.998884  991880 cri.go:89] found id: ""
	I0314 19:25:55.998971  991880 ssh_runner.go:195] Run: sudo systemctl stop kubelet
	I0314 19:25:56.018919  991880 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:25:56.030467  991880 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:25:56.030497  991880 kubeadm.go:156] found existing configuration files:
	
	I0314 19:25:56.030558  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:25:56.041403  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:25:56.041465  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:25:56.052140  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:25:56.062366  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:25:56.062420  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:25:56.075847  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.086246  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:25:56.086295  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:25:56.097148  991880 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:25:56.106718  991880 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:25:56.106756  991880 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:25:56.118337  991880 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:25:56.131893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase certs all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:56.264399  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.282634  991880 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubeconfig all --config /var/tmp/minikube/kubeadm.yaml": (1.018196302s)
	I0314 19:25:57.282664  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase kubelet-start --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.524172  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase control-plane all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.626554  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase etcd local --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:25:57.772151  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:25:57.772255  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.273445  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.772397  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:58.803788  991880 api_server.go:72] duration metric: took 1.031637073s to wait for apiserver process to appear ...
	I0314 19:25:58.803816  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:25:58.803835  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:25:58.804392  991880 api_server.go:269] stopped: https://192.168.39.148:8443/healthz: Get "https://192.168.39.148:8443/healthz": dial tcp 192.168.39.148:8443: connect: connection refused
	I0314 19:25:58.445134  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:00.447429  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.304059  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.588183  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.588231  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.588251  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.632993  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	W0314 19:26:01.633030  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 403:
	{"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/healthz\"","reason":"Forbidden","details":{},"code":403}
	I0314 19:26:01.804404  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:01.862306  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:01.862370  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[-]poststarthook/bootstrap-controller failed: reason withheld
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.304525  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.309902  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	W0314 19:26:02.309933  991880 api_server.go:103] status: https://192.168.39.148:8443/healthz returned error 500:
	[+]ping ok
	[+]log ok
	[+]etcd ok
	[+]poststarthook/start-kube-apiserver-admission-initializer ok
	[+]poststarthook/generic-apiserver-start-informers ok
	[+]poststarthook/priority-and-fairness-config-consumer ok
	[+]poststarthook/priority-and-fairness-filter ok
	[+]poststarthook/storage-object-count-tracker-hook ok
	[+]poststarthook/start-apiextensions-informers ok
	[+]poststarthook/start-apiextensions-controllers ok
	[+]poststarthook/crd-informer-synced ok
	[+]poststarthook/start-service-ip-repair-controllers ok
	[-]poststarthook/rbac/bootstrap-roles failed: reason withheld
	[-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld
	[+]poststarthook/priority-and-fairness-config-producer ok
	[+]poststarthook/start-system-namespaces-controller ok
	[+]poststarthook/bootstrap-controller ok
	[+]poststarthook/start-cluster-authentication-info-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-controller ok
	[+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok
	[+]poststarthook/start-legacy-token-tracking-controller ok
	[+]poststarthook/aggregator-reload-proxy-client-cert ok
	[+]poststarthook/start-kube-aggregator-informers ok
	[+]poststarthook/apiservice-registration-controller ok
	[+]poststarthook/apiservice-status-available-controller ok
	[+]poststarthook/kube-apiserver-autoregistration ok
	[+]autoregister-completion ok
	[+]poststarthook/apiservice-openapi-controller ok
	[+]poststarthook/apiservice-openapiv3-controller ok
	[+]poststarthook/apiservice-discovery-controller ok
	healthz check failed
	I0314 19:26:02.804296  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:26:02.812245  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:26:02.830235  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:26:02.830268  991880 api_server.go:131] duration metric: took 4.026443836s to wait for apiserver health ...
	I0314 19:26:02.830281  991880 cni.go:84] Creating CNI manager for ""
	I0314 19:26:02.830289  991880 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:26:02.832051  991880 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:25:59.407314  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:01.906570  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:25:59.120306  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:25:59.620183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.119877  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:00.619283  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.119314  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:01.620175  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:02.120113  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:02.120198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:02.173354  992344 cri.go:89] found id: ""
	I0314 19:26:02.173388  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.173421  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:02.173430  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:02.173509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:02.213519  992344 cri.go:89] found id: ""
	I0314 19:26:02.213555  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.213567  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:02.213574  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:02.213689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:02.259387  992344 cri.go:89] found id: ""
	I0314 19:26:02.259423  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.259435  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:02.259443  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:02.259511  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:02.308335  992344 cri.go:89] found id: ""
	I0314 19:26:02.308362  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.308373  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:02.308381  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:02.308441  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:02.353065  992344 cri.go:89] found id: ""
	I0314 19:26:02.353092  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.353101  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:02.353106  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:02.353183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:02.394305  992344 cri.go:89] found id: ""
	I0314 19:26:02.394342  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.394355  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:02.394365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:02.394443  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:02.441693  992344 cri.go:89] found id: ""
	I0314 19:26:02.441731  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.441743  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:02.441751  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:02.441816  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:02.479786  992344 cri.go:89] found id: ""
	I0314 19:26:02.479810  992344 logs.go:276] 0 containers: []
	W0314 19:26:02.479818  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:02.479827  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:02.479858  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:02.494835  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:02.494865  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:02.660069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:02.660114  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:02.660134  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:02.732148  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:02.732187  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:02.780910  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:02.780942  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:02.833411  991880 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:26:02.852441  991880 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:26:02.875033  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:26:02.891957  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:26:02.892003  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running / Ready:ContainersNotReady (containers with unready status: [coredns]) / ContainersReady:ContainersNotReady (containers with unready status: [coredns])
	I0314 19:26:02.892016  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running / Ready:ContainersNotReady (containers with unready status: [etcd]) / ContainersReady:ContainersNotReady (containers with unready status: [etcd])
	I0314 19:26:02.892034  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running / Ready:ContainersNotReady (containers with unready status: [kube-apiserver]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-apiserver])
	I0314 19:26:02.892045  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running / Ready:ContainersNotReady (containers with unready status: [kube-controller-manager]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-controller-manager])
	I0314 19:26:02.892062  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:26:02.892072  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running / Ready:ContainersNotReady (containers with unready status: [kube-scheduler]) / ContainersReady:ContainersNotReady (containers with unready status: [kube-scheduler])
	I0314 19:26:02.892087  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:26:02.892098  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:26:02.892109  991880 system_pods.go:74] duration metric: took 17.053651ms to wait for pod list to return data ...
	I0314 19:26:02.892122  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:26:02.896049  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:26:02.896076  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:26:02.896087  991880 node_conditions.go:105] duration metric: took 3.958558ms to run NodePressure ...
	I0314 19:26:02.896104  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.29.0-rc.2:$PATH" kubeadm init phase addon all --config /var/tmp/minikube/kubeadm.yaml"
	I0314 19:26:03.183167  991880 kubeadm.go:718] waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187696  991880 kubeadm.go:733] kubelet initialised
	I0314 19:26:03.187722  991880 kubeadm.go:734] duration metric: took 4.517639ms waiting for restarted kubelet to initialise ...
	I0314 19:26:03.187734  991880 pod_ready.go:35] extra waiting up to 4m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:03.193263  991880 pod_ready.go:78] waiting up to 4m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.198068  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198100  991880 pod_ready.go:81] duration metric: took 4.803067ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.198112  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "coredns-76f75df574-mcddh" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.198125  991880 pod_ready.go:78] waiting up to 4m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.202418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202440  991880 pod_ready.go:81] duration metric: took 4.299898ms for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.202453  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "etcd-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.202458  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.207418  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207440  991880 pod_ready.go:81] duration metric: took 4.975588ms for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.207447  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-apiserver-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.207453  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.278880  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278907  991880 pod_ready.go:81] duration metric: took 71.446692ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.278916  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.278922  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:03.679262  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679298  991880 pod_ready.go:81] duration metric: took 400.3668ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:03.679308  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-proxy-fkn7b" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:03.679315  991880 pod_ready.go:78] waiting up to 4m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.078953  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.078992  991880 pod_ready.go:81] duration metric: took 399.668454ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.079014  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "kube-scheduler-no-preload-731976" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.079023  991880 pod_ready.go:78] waiting up to 4m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:04.479041  991880 pod_ready.go:97] node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479069  991880 pod_ready.go:81] duration metric: took 400.034338ms for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:26:04.479078  991880 pod_ready.go:66] WaitExtra: waitPodCondition: node "no-preload-731976" hosting pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace is currently not "Ready" (skipping!): node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:04.479084  991880 pod_ready.go:38] duration metric: took 1.291340313s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:04.479109  991880 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:26:04.493423  991880 ops.go:34] apiserver oom_adj: -16
	I0314 19:26:04.493444  991880 kubeadm.go:591] duration metric: took 8.574611355s to restartPrimaryControlPlane
	I0314 19:26:04.493451  991880 kubeadm.go:393] duration metric: took 8.63778247s to StartCluster
	I0314 19:26:04.493495  991880 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.493576  991880 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:26:04.495275  991880 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:26:04.495648  991880 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.39.148 Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:26:04.497346  991880 out.go:177] * Verifying Kubernetes components...
	I0314 19:26:04.495716  991880 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:26:04.495843  991880 config.go:182] Loaded profile config "no-preload-731976": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.29.0-rc.2
	I0314 19:26:04.498678  991880 addons.go:69] Setting storage-provisioner=true in profile "no-preload-731976"
	I0314 19:26:04.498694  991880 addons.go:69] Setting metrics-server=true in profile "no-preload-731976"
	I0314 19:26:04.498719  991880 addons.go:234] Setting addon metrics-server=true in "no-preload-731976"
	I0314 19:26:04.498724  991880 addons.go:234] Setting addon storage-provisioner=true in "no-preload-731976"
	W0314 19:26:04.498725  991880 addons.go:243] addon metrics-server should already be in state true
	W0314 19:26:04.498735  991880 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:26:04.498755  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498764  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.498685  991880 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:26:04.498684  991880 addons.go:69] Setting default-storageclass=true in profile "no-preload-731976"
	I0314 19:26:04.498902  991880 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "no-preload-731976"
	I0314 19:26:04.499116  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499128  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499146  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499151  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.499275  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.499306  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.515926  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42749
	I0314 19:26:04.516541  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.517336  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.517380  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.517903  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.518496  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.518530  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.519804  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40163
	I0314 19:26:04.519877  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46313
	I0314 19:26:04.520294  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520433  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.520844  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.520874  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521224  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.521279  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.521294  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.521512  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.521839  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.522431  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.522462  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.524933  991880 addons.go:234] Setting addon default-storageclass=true in "no-preload-731976"
	W0314 19:26:04.524953  991880 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:26:04.524977  991880 host.go:66] Checking if "no-preload-731976" exists ...
	I0314 19:26:04.525238  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.525267  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.535073  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:42539
	I0314 19:26:04.535555  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.536084  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.536110  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.536455  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.536608  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.537991  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32893
	I0314 19:26:04.538320  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.538560  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.540272  991880 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:26:04.539087  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.541445  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.541556  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:26:04.541574  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:26:04.541590  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.541837  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.542000  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.544178  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.544425  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45761
	I0314 19:26:04.545832  991880 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:26:04.544882  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.544974  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.545887  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.545912  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.545529  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.547028  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.547051  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.546085  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.547153  991880 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.547187  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:26:04.547205  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.547263  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.547420  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.547492  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.548137  991880 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:26:04.548250  991880 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:26:04.549851  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550280  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.550310  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.550441  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.550642  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.550806  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.550933  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.594092  991880 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35241
	I0314 19:26:04.594507  991880 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:26:04.595046  991880 main.go:141] libmachine: Using API Version  1
	I0314 19:26:04.595068  991880 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:26:04.595380  991880 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:26:04.595600  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetState
	I0314 19:26:04.597532  991880 main.go:141] libmachine: (no-preload-731976) Calling .DriverName
	I0314 19:26:04.597819  991880 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.597841  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:26:04.597860  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHHostname
	I0314 19:26:04.600392  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600790  991880 main.go:141] libmachine: (no-preload-731976) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:57:0e:67", ip: ""} in network mk-no-preload-731976: {Iface:virbr1 ExpiryTime:2024-03-14 20:15:09 +0000 UTC Type:0 Mac:52:54:00:57:0e:67 Iaid: IPaddr:192.168.39.148 Prefix:24 Hostname:no-preload-731976 Clientid:01:52:54:00:57:0e:67}
	I0314 19:26:04.600822  991880 main.go:141] libmachine: (no-preload-731976) DBG | domain no-preload-731976 has defined IP address 192.168.39.148 and MAC address 52:54:00:57:0e:67 in network mk-no-preload-731976
	I0314 19:26:04.600932  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHPort
	I0314 19:26:04.601112  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHKeyPath
	I0314 19:26:04.601282  991880 main.go:141] libmachine: (no-preload-731976) Calling .GetSSHUsername
	I0314 19:26:04.601422  991880 sshutil.go:53] new ssh client: &{IP:192.168.39.148 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/no-preload-731976/id_rsa Username:docker}
	I0314 19:26:04.698561  991880 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:26:04.717893  991880 node_ready.go:35] waiting up to 6m0s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:04.789158  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:26:04.874271  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:26:04.874299  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:26:04.897643  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:26:04.915424  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:26:04.915447  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:26:04.962912  991880 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:04.962936  991880 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:26:05.037223  991880 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:26:05.140432  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140464  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.140791  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.140832  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.140858  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.140873  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.141237  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.141256  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:05.141341  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:05.147523  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:05.147539  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:05.147796  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:05.147815  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.021360  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.123678174s)
	I0314 19:26:06.021425  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.021439  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023327  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.023341  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023364  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.023384  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.023398  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.023662  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.023698  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.025042  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.063870  991880 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.026588061s)
	I0314 19:26:06.063950  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.063961  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064301  991880 main.go:141] libmachine: (no-preload-731976) DBG | Closing plugin on server side
	I0314 19:26:06.064325  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064381  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064395  991880 main.go:141] libmachine: Making call to close driver server
	I0314 19:26:06.064404  991880 main.go:141] libmachine: (no-preload-731976) Calling .Close
	I0314 19:26:06.064668  991880 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:26:06.064685  991880 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:26:06.064698  991880 addons.go:470] Verifying addon metrics-server=true in "no-preload-731976"
	I0314 19:26:06.066642  991880 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:26:02.945917  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.446120  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:03.906603  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.908049  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:05.359638  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:05.377722  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:05.377799  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:05.436279  992344 cri.go:89] found id: ""
	I0314 19:26:05.436316  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.436330  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:05.436338  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:05.436402  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:05.482775  992344 cri.go:89] found id: ""
	I0314 19:26:05.482822  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.482853  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:05.482861  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:05.482933  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:05.542954  992344 cri.go:89] found id: ""
	I0314 19:26:05.542986  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.542996  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:05.543003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:05.543069  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:05.582596  992344 cri.go:89] found id: ""
	I0314 19:26:05.582630  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.582643  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:05.582651  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:05.582716  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:05.623720  992344 cri.go:89] found id: ""
	I0314 19:26:05.623750  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.623762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:05.623770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:05.623828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:05.669868  992344 cri.go:89] found id: ""
	I0314 19:26:05.669946  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.669962  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:05.669974  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:05.670045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:05.718786  992344 cri.go:89] found id: ""
	I0314 19:26:05.718816  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.718827  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:05.718834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:05.718905  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:05.761781  992344 cri.go:89] found id: ""
	I0314 19:26:05.761817  992344 logs.go:276] 0 containers: []
	W0314 19:26:05.761828  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:05.761841  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:05.761856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:05.826095  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:05.826131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:05.842893  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:05.842928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:05.937536  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:05.937567  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:05.937585  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:06.013419  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:06.013465  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:08.560995  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:08.576897  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:08.576964  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:08.617367  992344 cri.go:89] found id: ""
	I0314 19:26:08.617395  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.617406  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:08.617412  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:08.617471  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:08.655448  992344 cri.go:89] found id: ""
	I0314 19:26:08.655480  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.655492  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:08.655498  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:08.656004  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:08.696167  992344 cri.go:89] found id: ""
	I0314 19:26:08.696197  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.696206  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:08.696231  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:08.696294  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:06.067992  991880 addons.go:505] duration metric: took 1.572277081s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
	I0314 19:26:06.722889  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:07.943306  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:09.945181  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.407517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:10.908715  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:08.736061  992344 cri.go:89] found id: ""
	I0314 19:26:08.736088  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.736096  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:08.736102  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:08.736168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:08.782458  992344 cri.go:89] found id: ""
	I0314 19:26:08.782490  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.782501  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:08.782508  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:08.782585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:08.833616  992344 cri.go:89] found id: ""
	I0314 19:26:08.833647  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.833659  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:08.833667  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:08.833734  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:08.875871  992344 cri.go:89] found id: ""
	I0314 19:26:08.875900  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.875909  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:08.875914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:08.875972  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:08.921763  992344 cri.go:89] found id: ""
	I0314 19:26:08.921793  992344 logs.go:276] 0 containers: []
	W0314 19:26:08.921804  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:08.921816  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:08.921834  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:08.937716  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:08.937748  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:09.024271  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.024295  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:09.024309  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:09.098600  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:09.098636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:09.146178  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:09.146226  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:11.698261  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:11.715209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:11.715285  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:11.758631  992344 cri.go:89] found id: ""
	I0314 19:26:11.758664  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.758680  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:11.758688  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:11.758758  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:11.798229  992344 cri.go:89] found id: ""
	I0314 19:26:11.798258  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.798268  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:11.798274  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:11.798341  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:11.838801  992344 cri.go:89] found id: ""
	I0314 19:26:11.838837  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.838849  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:11.838857  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:11.838925  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:11.884460  992344 cri.go:89] found id: ""
	I0314 19:26:11.884495  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.884507  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:11.884515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:11.884577  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:11.937743  992344 cri.go:89] found id: ""
	I0314 19:26:11.937770  992344 logs.go:276] 0 containers: []
	W0314 19:26:11.937781  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:11.937789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:11.937852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:12.007509  992344 cri.go:89] found id: ""
	I0314 19:26:12.007542  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.007552  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:12.007561  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:12.007640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:12.068478  992344 cri.go:89] found id: ""
	I0314 19:26:12.068514  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.068523  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:12.068529  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:12.068592  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:12.108658  992344 cri.go:89] found id: ""
	I0314 19:26:12.108699  992344 logs.go:276] 0 containers: []
	W0314 19:26:12.108712  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:12.108725  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:12.108754  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:12.195134  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:12.195170  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:12.240710  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:12.240746  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:12.297470  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:12.297506  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:12.312552  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:12.312581  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:12.392069  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:09.222189  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:11.223717  991880 node_ready.go:53] node "no-preload-731976" has status "Ready":"False"
	I0314 19:26:12.226297  991880 node_ready.go:49] node "no-preload-731976" has status "Ready":"True"
	I0314 19:26:12.226328  991880 node_ready.go:38] duration metric: took 7.508398002s for node "no-preload-731976" to be "Ready" ...
	I0314 19:26:12.226343  991880 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:26:12.234015  991880 pod_ready.go:78] waiting up to 6m0s for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242287  991880 pod_ready.go:92] pod "coredns-76f75df574-mcddh" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:12.242314  991880 pod_ready.go:81] duration metric: took 8.261811ms for pod "coredns-76f75df574-mcddh" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.242325  991880 pod_ready.go:78] waiting up to 6m0s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252237  991880 pod_ready.go:92] pod "etcd-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:13.252268  991880 pod_ready.go:81] duration metric: took 1.00993426s for pod "etcd-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:13.252277  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:12.443709  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.943804  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:13.407905  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:15.906891  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.907361  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:14.893036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:14.909532  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:14.909603  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:14.958974  992344 cri.go:89] found id: ""
	I0314 19:26:14.959001  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.959010  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:14.959016  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:14.959071  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:14.996462  992344 cri.go:89] found id: ""
	I0314 19:26:14.996496  992344 logs.go:276] 0 containers: []
	W0314 19:26:14.996509  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:14.996516  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:14.996584  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:15.038159  992344 cri.go:89] found id: ""
	I0314 19:26:15.038192  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.038200  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:15.038214  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:15.038280  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:15.077455  992344 cri.go:89] found id: ""
	I0314 19:26:15.077486  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.077498  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:15.077506  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:15.077595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:15.117873  992344 cri.go:89] found id: ""
	I0314 19:26:15.117905  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.117914  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:15.117921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:15.117984  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:15.156493  992344 cri.go:89] found id: ""
	I0314 19:26:15.156528  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.156541  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:15.156549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:15.156615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:15.195036  992344 cri.go:89] found id: ""
	I0314 19:26:15.195065  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.195073  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:15.195079  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:15.195131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:15.237570  992344 cri.go:89] found id: ""
	I0314 19:26:15.237607  992344 logs.go:276] 0 containers: []
	W0314 19:26:15.237619  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:15.237631  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:15.237646  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:15.323818  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:15.323871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:15.370068  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:15.370110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:15.425984  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:15.426018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.442475  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:15.442513  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:15.519714  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.019937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:18.036457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:18.036534  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:18.076226  992344 cri.go:89] found id: ""
	I0314 19:26:18.076256  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.076268  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:18.076275  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:18.076339  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:18.112355  992344 cri.go:89] found id: ""
	I0314 19:26:18.112390  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.112401  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:18.112409  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:18.112475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:18.148502  992344 cri.go:89] found id: ""
	I0314 19:26:18.148533  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.148544  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:18.148551  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:18.148625  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:18.185085  992344 cri.go:89] found id: ""
	I0314 19:26:18.185114  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.185121  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:18.185127  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:18.185192  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:18.226487  992344 cri.go:89] found id: ""
	I0314 19:26:18.226512  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.226520  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:18.226527  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:18.226595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:18.274014  992344 cri.go:89] found id: ""
	I0314 19:26:18.274044  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.274053  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:18.274062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:18.274155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:18.318696  992344 cri.go:89] found id: ""
	I0314 19:26:18.318729  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.318741  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:18.318749  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:18.318821  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:18.361430  992344 cri.go:89] found id: ""
	I0314 19:26:18.361459  992344 logs.go:276] 0 containers: []
	W0314 19:26:18.361467  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:18.361477  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:18.361489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:18.442041  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:18.442062  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:18.442082  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:18.522821  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:18.522863  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:18.565896  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:18.565935  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:18.620887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:18.620924  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:15.268738  991880 pod_ready.go:102] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:16.758759  991880 pod_ready.go:92] pod "kube-apiserver-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.758794  991880 pod_ready.go:81] duration metric: took 3.50650262s for pod "kube-apiserver-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.758807  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.763984  991880 pod_ready.go:92] pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.764010  991880 pod_ready.go:81] duration metric: took 5.192518ms for pod "kube-controller-manager-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.764021  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770418  991880 pod_ready.go:92] pod "kube-proxy-fkn7b" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.770442  991880 pod_ready.go:81] duration metric: took 6.412988ms for pod "kube-proxy-fkn7b" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.770453  991880 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775342  991880 pod_ready.go:92] pod "kube-scheduler-no-preload-731976" in "kube-system" namespace has status "Ready":"True"
	I0314 19:26:16.775367  991880 pod_ready.go:81] duration metric: took 4.906261ms for pod "kube-scheduler-no-preload-731976" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:16.775378  991880 pod_ready.go:78] waiting up to 6m0s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	I0314 19:26:18.782444  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:17.443755  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.446058  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:19.907866  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.407195  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.136379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:21.153065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:21.153159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:21.198345  992344 cri.go:89] found id: ""
	I0314 19:26:21.198376  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.198386  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:21.198393  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:21.198465  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:21.240699  992344 cri.go:89] found id: ""
	I0314 19:26:21.240738  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.240747  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:21.240753  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:21.240805  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:21.280891  992344 cri.go:89] found id: ""
	I0314 19:26:21.280978  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.280994  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:21.281004  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:21.281074  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:21.320316  992344 cri.go:89] found id: ""
	I0314 19:26:21.320348  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.320360  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:21.320369  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:21.320428  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:21.367972  992344 cri.go:89] found id: ""
	I0314 19:26:21.368006  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.368018  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:21.368024  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:21.368091  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:21.406060  992344 cri.go:89] found id: ""
	I0314 19:26:21.406090  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.406101  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:21.406108  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:21.406175  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:21.450885  992344 cri.go:89] found id: ""
	I0314 19:26:21.450908  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.450927  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:21.450933  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:21.450992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:21.497391  992344 cri.go:89] found id: ""
	I0314 19:26:21.497424  992344 logs.go:276] 0 containers: []
	W0314 19:26:21.497436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:21.497453  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:21.497471  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:21.547789  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:21.547819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:21.604433  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:21.604482  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:21.619977  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:21.620005  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:21.695604  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:21.695629  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:21.695643  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:20.782765  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:22.786856  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:21.943234  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:23.943336  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:25.944005  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.407901  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:26.906562  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:24.274618  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:24.290815  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:24.290891  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:24.330657  992344 cri.go:89] found id: ""
	I0314 19:26:24.330694  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.330706  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:24.330718  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:24.330788  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:24.373140  992344 cri.go:89] found id: ""
	I0314 19:26:24.373192  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.373206  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:24.373214  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:24.373295  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:24.412131  992344 cri.go:89] found id: ""
	I0314 19:26:24.412161  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.412183  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:24.412191  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:24.412281  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:24.453506  992344 cri.go:89] found id: ""
	I0314 19:26:24.453535  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.453546  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:24.453554  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:24.453621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:24.495345  992344 cri.go:89] found id: ""
	I0314 19:26:24.495379  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.495391  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:24.495399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:24.495468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:24.534744  992344 cri.go:89] found id: ""
	I0314 19:26:24.534770  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.534779  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:24.534785  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:24.534847  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:24.573594  992344 cri.go:89] found id: ""
	I0314 19:26:24.573621  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.573629  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:24.573635  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:24.573685  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:24.612677  992344 cri.go:89] found id: ""
	I0314 19:26:24.612708  992344 logs.go:276] 0 containers: []
	W0314 19:26:24.612718  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:24.612730  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:24.612747  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:24.664393  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:24.664426  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:24.679911  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:24.679945  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:24.767513  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:24.767560  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:24.767580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:24.853448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:24.853491  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.398576  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:27.414665  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:27.414749  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:27.461901  992344 cri.go:89] found id: ""
	I0314 19:26:27.461930  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.461938  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:27.461944  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:27.462009  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:27.502865  992344 cri.go:89] found id: ""
	I0314 19:26:27.502893  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.502902  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:27.502908  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:27.502966  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:27.542327  992344 cri.go:89] found id: ""
	I0314 19:26:27.542374  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.542387  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:27.542396  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:27.542484  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:27.583269  992344 cri.go:89] found id: ""
	I0314 19:26:27.583295  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.583304  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:27.583310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:27.583375  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:27.620426  992344 cri.go:89] found id: ""
	I0314 19:26:27.620467  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.620483  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:27.620491  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:27.620560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:27.659165  992344 cri.go:89] found id: ""
	I0314 19:26:27.659198  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.659214  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:27.659222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:27.659291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:27.701565  992344 cri.go:89] found id: ""
	I0314 19:26:27.701600  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.701609  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:27.701615  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:27.701706  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:27.739782  992344 cri.go:89] found id: ""
	I0314 19:26:27.739813  992344 logs.go:276] 0 containers: []
	W0314 19:26:27.739822  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:27.739832  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:27.739847  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:27.757112  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:27.757146  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:27.844634  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:27.844670  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:27.844688  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:27.928687  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:27.928720  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:27.976582  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:27.976614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:25.282663  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:27.783551  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.443159  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.943660  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:28.908305  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.908486  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:30.536573  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:30.551552  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:30.551624  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.590498  992344 cri.go:89] found id: ""
	I0314 19:26:30.590528  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.590541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:30.590550  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:30.590612  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:30.629891  992344 cri.go:89] found id: ""
	I0314 19:26:30.629922  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.629945  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:30.629960  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:30.630031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:30.672557  992344 cri.go:89] found id: ""
	I0314 19:26:30.672592  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.672604  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:30.672611  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:30.672675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:30.709889  992344 cri.go:89] found id: ""
	I0314 19:26:30.709998  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.710026  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:30.710034  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:30.710103  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:30.749044  992344 cri.go:89] found id: ""
	I0314 19:26:30.749078  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.749090  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:30.749097  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:30.749167  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:30.794111  992344 cri.go:89] found id: ""
	I0314 19:26:30.794136  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.794146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:30.794154  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:30.794229  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:30.837175  992344 cri.go:89] found id: ""
	I0314 19:26:30.837204  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.837213  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:30.837220  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:30.837276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:30.875977  992344 cri.go:89] found id: ""
	I0314 19:26:30.876012  992344 logs.go:276] 0 containers: []
	W0314 19:26:30.876026  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:30.876039  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:30.876077  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:30.965922  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:30.965963  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:31.011002  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:31.011041  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:31.067381  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:31.067415  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:31.082515  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:31.082547  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:31.158951  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:33.659376  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:33.673829  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:33.673889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:30.283175  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:32.285501  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.446301  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.942963  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.407396  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:35.906104  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:33.718619  992344 cri.go:89] found id: ""
	I0314 19:26:33.718655  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.718667  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:33.718675  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:33.718752  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:33.760408  992344 cri.go:89] found id: ""
	I0314 19:26:33.760443  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.760455  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:33.760463  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:33.760532  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:33.803648  992344 cri.go:89] found id: ""
	I0314 19:26:33.803683  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.803697  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:33.803706  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:33.803770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:33.845297  992344 cri.go:89] found id: ""
	I0314 19:26:33.845332  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.845344  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:33.845352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:33.845420  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:33.885826  992344 cri.go:89] found id: ""
	I0314 19:26:33.885862  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.885873  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:33.885881  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:33.885953  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:33.930611  992344 cri.go:89] found id: ""
	I0314 19:26:33.930641  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.930652  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:33.930659  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:33.930720  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:33.975523  992344 cri.go:89] found id: ""
	I0314 19:26:33.975558  992344 logs.go:276] 0 containers: []
	W0314 19:26:33.975569  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:33.975592  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:33.975671  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:34.021004  992344 cri.go:89] found id: ""
	I0314 19:26:34.021039  992344 logs.go:276] 0 containers: []
	W0314 19:26:34.021048  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:34.021058  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:34.021072  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.066775  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:34.066808  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:34.123513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:34.123555  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:34.138355  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:34.138390  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:34.210698  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:34.210733  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:34.210752  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:36.801398  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:36.818486  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:36.818561  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:36.864485  992344 cri.go:89] found id: ""
	I0314 19:26:36.864510  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.864519  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:36.864525  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:36.864585  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:36.908438  992344 cri.go:89] found id: ""
	I0314 19:26:36.908468  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.908478  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:36.908486  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:36.908554  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:36.947578  992344 cri.go:89] found id: ""
	I0314 19:26:36.947605  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.947613  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:36.947618  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:36.947664  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:36.985495  992344 cri.go:89] found id: ""
	I0314 19:26:36.985526  992344 logs.go:276] 0 containers: []
	W0314 19:26:36.985537  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:36.985545  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:36.985609  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:37.027897  992344 cri.go:89] found id: ""
	I0314 19:26:37.027929  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.027947  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:37.027955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:37.028024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:37.066665  992344 cri.go:89] found id: ""
	I0314 19:26:37.066702  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.066716  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:37.066726  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:37.066818  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:37.104882  992344 cri.go:89] found id: ""
	I0314 19:26:37.104911  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.104920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:37.104926  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:37.104989  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:37.150288  992344 cri.go:89] found id: ""
	I0314 19:26:37.150318  992344 logs.go:276] 0 containers: []
	W0314 19:26:37.150326  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:37.150338  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:37.150356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:37.207269  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:37.207314  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:37.222256  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:37.222290  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:37.305854  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:37.305879  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:37.305894  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:37.391306  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:37.391343  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:34.784650  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:37.283602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.444420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.943754  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:38.406563  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:40.407414  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.905944  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:39.939379  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:39.955255  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:39.955317  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:39.996585  992344 cri.go:89] found id: ""
	I0314 19:26:39.996618  992344 logs.go:276] 0 containers: []
	W0314 19:26:39.996627  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:39.996633  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:39.996698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:40.038725  992344 cri.go:89] found id: ""
	I0314 19:26:40.038761  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.038774  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:40.038782  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:40.038846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:40.080619  992344 cri.go:89] found id: ""
	I0314 19:26:40.080656  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.080668  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:40.080677  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:40.080742  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:40.122120  992344 cri.go:89] found id: ""
	I0314 19:26:40.122163  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.122174  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:40.122182  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:40.122248  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:40.161563  992344 cri.go:89] found id: ""
	I0314 19:26:40.161594  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.161605  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:40.161612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:40.161680  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:40.200236  992344 cri.go:89] found id: ""
	I0314 19:26:40.200267  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.200278  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:40.200287  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:40.200358  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:40.237537  992344 cri.go:89] found id: ""
	I0314 19:26:40.237570  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.237581  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:40.237588  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:40.237657  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:40.293038  992344 cri.go:89] found id: ""
	I0314 19:26:40.293070  992344 logs.go:276] 0 containers: []
	W0314 19:26:40.293078  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:40.293086  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:40.293110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:40.307710  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:40.307742  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:40.385255  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:40.385278  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:40.385312  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:40.469385  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:40.469421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:40.513030  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:40.513064  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.069286  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:43.086066  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:43.086183  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:43.131373  992344 cri.go:89] found id: ""
	I0314 19:26:43.131400  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.131408  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:43.131414  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:43.131491  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:43.175283  992344 cri.go:89] found id: ""
	I0314 19:26:43.175311  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.175319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:43.175325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:43.175385  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:43.214979  992344 cri.go:89] found id: ""
	I0314 19:26:43.215006  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.215014  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:43.215020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:43.215072  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:43.252071  992344 cri.go:89] found id: ""
	I0314 19:26:43.252101  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.252110  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:43.252136  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:43.252200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:43.290310  992344 cri.go:89] found id: ""
	I0314 19:26:43.290341  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.290352  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:43.290359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:43.290426  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:43.330639  992344 cri.go:89] found id: ""
	I0314 19:26:43.330673  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.330684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:43.330692  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:43.330761  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:43.372669  992344 cri.go:89] found id: ""
	I0314 19:26:43.372698  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.372706  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:43.372712  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:43.372775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:43.416118  992344 cri.go:89] found id: ""
	I0314 19:26:43.416154  992344 logs.go:276] 0 containers: []
	W0314 19:26:43.416171  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:43.416184  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:43.416225  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:43.501495  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:43.501541  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:43.545898  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:43.545932  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:43.601172  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:43.601205  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:43.616307  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:43.616339  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:43.699003  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:39.782316  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:41.783130  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:43.783259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:42.943853  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:45.446214  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:44.907328  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.406383  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:46.199661  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:46.214256  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:46.214325  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:46.263891  992344 cri.go:89] found id: ""
	I0314 19:26:46.263921  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.263932  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:46.263940  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:46.264006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:46.303515  992344 cri.go:89] found id: ""
	I0314 19:26:46.303542  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.303551  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:46.303558  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:46.303634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:46.346323  992344 cri.go:89] found id: ""
	I0314 19:26:46.346358  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.346371  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:46.346378  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:46.346444  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:46.388459  992344 cri.go:89] found id: ""
	I0314 19:26:46.388490  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.388500  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:46.388507  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:46.388560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:46.428907  992344 cri.go:89] found id: ""
	I0314 19:26:46.428945  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.428957  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:46.428966  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:46.429032  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:46.475683  992344 cri.go:89] found id: ""
	I0314 19:26:46.475713  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.475724  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:46.475737  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:46.475803  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:46.514509  992344 cri.go:89] found id: ""
	I0314 19:26:46.514543  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.514552  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:46.514558  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:46.514621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:46.553984  992344 cri.go:89] found id: ""
	I0314 19:26:46.554012  992344 logs.go:276] 0 containers: []
	W0314 19:26:46.554023  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:46.554036  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:46.554054  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:46.615513  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:46.615548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:46.630491  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:46.630525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:46.733214  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:46.733250  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:46.733267  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:46.832662  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:46.832699  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:45.783361  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:48.283626  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:47.943882  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.944184  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.409215  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.907278  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:49.382361  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:49.398159  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:49.398220  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:49.441989  992344 cri.go:89] found id: ""
	I0314 19:26:49.442017  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.442027  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:49.442034  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:49.442110  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:49.484456  992344 cri.go:89] found id: ""
	I0314 19:26:49.484492  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.484503  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:49.484520  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:49.484587  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:49.522409  992344 cri.go:89] found id: ""
	I0314 19:26:49.522438  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.522449  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:49.522456  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:49.522509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:49.556955  992344 cri.go:89] found id: ""
	I0314 19:26:49.556983  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.556991  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:49.556996  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:49.557045  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:49.597924  992344 cri.go:89] found id: ""
	I0314 19:26:49.597960  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.597971  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:49.597987  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:49.598054  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:49.635744  992344 cri.go:89] found id: ""
	I0314 19:26:49.635780  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.635793  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:49.635801  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:49.635869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:49.678085  992344 cri.go:89] found id: ""
	I0314 19:26:49.678124  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.678136  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:49.678144  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:49.678247  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:49.714483  992344 cri.go:89] found id: ""
	I0314 19:26:49.714515  992344 logs.go:276] 0 containers: []
	W0314 19:26:49.714527  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:49.714538  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:49.714554  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:49.760438  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:49.760473  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:49.818954  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:49.818992  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:49.835609  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:49.835642  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:49.928723  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:49.928747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:49.928759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:52.517455  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:52.534986  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:52.535066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:52.580240  992344 cri.go:89] found id: ""
	I0314 19:26:52.580279  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.580292  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:52.580301  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:52.580367  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:52.644053  992344 cri.go:89] found id: ""
	I0314 19:26:52.644085  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.644096  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:52.644103  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:52.644171  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:52.706892  992344 cri.go:89] found id: ""
	I0314 19:26:52.706919  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.706928  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:52.706935  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:52.706986  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:52.761039  992344 cri.go:89] found id: ""
	I0314 19:26:52.761077  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.761090  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:52.761099  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:52.761173  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:52.806217  992344 cri.go:89] found id: ""
	I0314 19:26:52.806251  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.806263  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:52.806271  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:52.806415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:52.848417  992344 cri.go:89] found id: ""
	I0314 19:26:52.848448  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.848457  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:52.848464  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:52.848527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:52.890639  992344 cri.go:89] found id: ""
	I0314 19:26:52.890674  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.890687  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:52.890695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:52.890775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:52.934637  992344 cri.go:89] found id: ""
	I0314 19:26:52.934666  992344 logs.go:276] 0 containers: []
	W0314 19:26:52.934677  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:52.934690  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:52.934707  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:52.949797  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:52.949825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:53.033720  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:53.033751  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:53.033766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:53.113919  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:53.113960  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:53.163483  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:53.163525  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:50.781924  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:52.788346  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:51.945712  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:54.442871  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.443456  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:53.908184  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:56.407851  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:55.718119  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:55.733183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:55.733276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:55.778015  992344 cri.go:89] found id: ""
	I0314 19:26:55.778042  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.778050  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:55.778057  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:55.778146  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:55.829955  992344 cri.go:89] found id: ""
	I0314 19:26:55.829996  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.830011  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:55.830019  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:55.830089  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:55.872198  992344 cri.go:89] found id: ""
	I0314 19:26:55.872247  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.872260  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:55.872268  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:55.872327  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:55.916604  992344 cri.go:89] found id: ""
	I0314 19:26:55.916637  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.916649  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:55.916657  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:55.916725  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:55.957028  992344 cri.go:89] found id: ""
	I0314 19:26:55.957051  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.957060  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:55.957065  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:55.957118  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:55.996640  992344 cri.go:89] found id: ""
	I0314 19:26:55.996671  992344 logs.go:276] 0 containers: []
	W0314 19:26:55.996684  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:55.996695  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:55.996750  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:56.036638  992344 cri.go:89] found id: ""
	I0314 19:26:56.036688  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.036701  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:56.036709  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:56.036777  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:56.072594  992344 cri.go:89] found id: ""
	I0314 19:26:56.072624  992344 logs.go:276] 0 containers: []
	W0314 19:26:56.072633  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:56.072643  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:56.072657  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:56.129011  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:56.129044  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:56.143042  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:56.143075  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:56.232545  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:56.232574  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:56.232589  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:56.317471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:56.317517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:55.282413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:57.283169  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.445079  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:00.942781  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.908918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.409159  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:26:58.864325  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:26:58.879029  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:26:58.879108  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:26:58.918490  992344 cri.go:89] found id: ""
	I0314 19:26:58.918519  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.918526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:26:58.918533  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:26:58.918598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:26:58.963392  992344 cri.go:89] found id: ""
	I0314 19:26:58.963423  992344 logs.go:276] 0 containers: []
	W0314 19:26:58.963431  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:26:58.963437  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:26:58.963502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:26:59.007104  992344 cri.go:89] found id: ""
	I0314 19:26:59.007146  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.007158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:26:59.007166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:26:59.007235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:26:59.050075  992344 cri.go:89] found id: ""
	I0314 19:26:59.050114  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.050127  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:26:59.050138  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:26:59.050204  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:26:59.090262  992344 cri.go:89] found id: ""
	I0314 19:26:59.090289  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.090298  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:26:59.090303  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:26:59.090355  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:26:59.130556  992344 cri.go:89] found id: ""
	I0314 19:26:59.130584  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.130592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:26:59.130598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:26:59.130659  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:26:59.170640  992344 cri.go:89] found id: ""
	I0314 19:26:59.170670  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.170680  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:26:59.170689  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:26:59.170769  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:26:59.206456  992344 cri.go:89] found id: ""
	I0314 19:26:59.206494  992344 logs.go:276] 0 containers: []
	W0314 19:26:59.206503  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:26:59.206513  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:26:59.206533  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:26:59.285760  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:26:59.285781  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:26:59.285793  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:26:59.363143  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:26:59.363182  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:26:59.415614  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:26:59.415655  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:26:59.470619  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:26:59.470661  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:01.987397  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:02.004152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:02.004243  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:02.050022  992344 cri.go:89] found id: ""
	I0314 19:27:02.050056  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.050068  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:02.050075  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:02.050144  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:02.089639  992344 cri.go:89] found id: ""
	I0314 19:27:02.089666  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.089674  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:02.089680  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:02.089740  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:02.128368  992344 cri.go:89] found id: ""
	I0314 19:27:02.128400  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.128409  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:02.128415  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:02.128468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:02.165609  992344 cri.go:89] found id: ""
	I0314 19:27:02.165651  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.165664  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:02.165672  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:02.165745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:02.204317  992344 cri.go:89] found id: ""
	I0314 19:27:02.204347  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.204359  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:02.204367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:02.204436  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:02.247897  992344 cri.go:89] found id: ""
	I0314 19:27:02.247931  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.247943  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:02.247951  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:02.248025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:02.287938  992344 cri.go:89] found id: ""
	I0314 19:27:02.287967  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.287979  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:02.287985  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:02.288057  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:02.324712  992344 cri.go:89] found id: ""
	I0314 19:27:02.324739  992344 logs.go:276] 0 containers: []
	W0314 19:27:02.324751  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:02.324762  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:02.324779  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:02.400908  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:02.400932  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:02.400953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:02.489797  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:02.489830  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:02.540134  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:02.540168  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:02.599093  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:02.599128  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:26:59.283757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:01.782785  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:02.946825  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.447529  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:03.906952  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.909530  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:05.115036  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:05.130479  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:05.130562  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:05.174573  992344 cri.go:89] found id: ""
	I0314 19:27:05.174605  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.174617  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:05.174624  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:05.174689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:05.212508  992344 cri.go:89] found id: ""
	I0314 19:27:05.212535  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.212546  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:05.212554  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:05.212621  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:05.250714  992344 cri.go:89] found id: ""
	I0314 19:27:05.250750  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.250762  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:05.250770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:05.250839  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:05.291691  992344 cri.go:89] found id: ""
	I0314 19:27:05.291714  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.291722  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:05.291728  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:05.291775  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:05.332275  992344 cri.go:89] found id: ""
	I0314 19:27:05.332302  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.332311  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:05.332318  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:05.332384  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:05.370048  992344 cri.go:89] found id: ""
	I0314 19:27:05.370075  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.370084  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:05.370090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:05.370163  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:05.413797  992344 cri.go:89] found id: ""
	I0314 19:27:05.413825  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.413836  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:05.413844  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:05.413909  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:05.454295  992344 cri.go:89] found id: ""
	I0314 19:27:05.454321  992344 logs.go:276] 0 containers: []
	W0314 19:27:05.454329  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:05.454341  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:05.454359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:05.509578  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:05.509614  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:05.525317  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:05.525347  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:05.607550  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:05.607576  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:05.607593  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:05.690865  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:05.690904  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
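Every "describe nodes" attempt in these cycles fails with "The connection to the server localhost:8443 was refused", which simply means nothing is listening on the apiserver port inside the VM yet. A minimal sketch of that reachability check in Go (the address comes from the error text above; the timeout value is arbitrary):

```go
package main

import (
	"fmt"
	"net"
	"time"
)

func main() {
	// The in-VM kubeconfig points kubectl at localhost:8443. If the TCP
	// connection is refused, kubectl reports exactly the error seen in the
	// log above; a successful dial means an apiserver (or something else)
	// is at least accepting connections on that port.
	conn, err := net.DialTimeout("tcp", "localhost:8443", 2*time.Second)
	if err != nil {
		fmt.Println("apiserver not reachable:", err)
		return
	}
	conn.Close()
	fmt.Println("apiserver port is accepting connections")
}
```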
	I0314 19:27:08.233183  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:08.249612  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:08.249679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:08.298188  992344 cri.go:89] found id: ""
	I0314 19:27:08.298226  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.298238  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:08.298247  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:08.298310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:08.339102  992344 cri.go:89] found id: ""
	I0314 19:27:08.339132  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.339141  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:08.339148  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:08.339208  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:08.377029  992344 cri.go:89] found id: ""
	I0314 19:27:08.377060  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.377068  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:08.377074  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:08.377131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:08.414418  992344 cri.go:89] found id: ""
	I0314 19:27:08.414450  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.414461  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:08.414468  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:08.414528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:08.454027  992344 cri.go:89] found id: ""
	I0314 19:27:08.454057  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.454068  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:08.454076  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:08.454134  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:08.494818  992344 cri.go:89] found id: ""
	I0314 19:27:08.494847  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.494856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:08.494863  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:08.494927  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:08.534522  992344 cri.go:89] found id: ""
	I0314 19:27:08.534557  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.534567  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:08.534575  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:08.534637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:08.572164  992344 cri.go:89] found id: ""
	I0314 19:27:08.572197  992344 logs.go:276] 0 containers: []
	W0314 19:27:08.572241  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:08.572257  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:08.572275  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:08.588223  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:08.588261  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:08.675851  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:08.675877  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:08.675889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:04.282689  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:06.284132  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.783530  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:07.448817  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:09.944024  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.407848  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:10.408924  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:12.907004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:08.763975  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:08.764014  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:08.813516  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:08.813552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.370525  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:11.385556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:11.385645  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:11.426788  992344 cri.go:89] found id: ""
	I0314 19:27:11.426823  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.426831  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:11.426837  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:11.426910  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:11.465752  992344 cri.go:89] found id: ""
	I0314 19:27:11.465786  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.465794  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:11.465801  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:11.465849  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:11.506855  992344 cri.go:89] found id: ""
	I0314 19:27:11.506890  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.506904  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:11.506912  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:11.506973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:11.548844  992344 cri.go:89] found id: ""
	I0314 19:27:11.548880  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.548891  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:11.548900  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:11.548960  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:11.590828  992344 cri.go:89] found id: ""
	I0314 19:27:11.590861  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.590872  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:11.590880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:11.590952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:11.631863  992344 cri.go:89] found id: ""
	I0314 19:27:11.631892  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.631904  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:11.631913  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:11.631975  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:11.670204  992344 cri.go:89] found id: ""
	I0314 19:27:11.670230  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.670238  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:11.670244  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:11.670293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:11.711946  992344 cri.go:89] found id: ""
	I0314 19:27:11.711980  992344 logs.go:276] 0 containers: []
	W0314 19:27:11.711991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:11.712005  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:11.712026  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:11.766647  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:11.766682  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:11.784449  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:11.784475  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:11.866503  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:11.866536  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:11.866552  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:11.952506  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:11.952538  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:10.787653  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:13.282424  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:11.945870  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.444491  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:14.909465  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.918004  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
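The interleaved pod_ready lines come from separate test processes (991880, 992056, 992563) polling whether their metrics-server pod has reached the Ready condition. A minimal client-go sketch of such a poll, assuming a reachable cluster and a kubeconfig at the default path; the pod name and namespace are copied from the log, everything else is illustrative and not how pod_ready.go itself is implemented:

```go
package main

import (
	"context"
	"fmt"
	"time"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

// isPodReady reports whether the pod's Ready condition is True.
func isPodReady(pod *corev1.Pod) bool {
	for _, c := range pod.Status.Conditions {
		if c.Type == corev1.PodReady {
			return c.Status == corev1.ConditionTrue
		}
	}
	return false
}

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for {
		pod, err := cs.CoreV1().Pods("kube-system").Get(context.TODO(),
			"metrics-server-57f55c9bc5-w8cj6", metav1.GetOptions{})
		if err == nil && isPodReady(pod) {
			fmt.Println("pod is Ready")
			return
		}
		// Mirrors the repeated `has status "Ready":"False"` lines above.
		fmt.Println("pod not Ready yet, retrying")
		time.Sleep(2 * time.Second)
	}
}
```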
	I0314 19:27:14.502903  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:14.518020  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:14.518084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:14.555486  992344 cri.go:89] found id: ""
	I0314 19:27:14.555528  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.555541  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:14.555552  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:14.555615  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:14.592068  992344 cri.go:89] found id: ""
	I0314 19:27:14.592102  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.592113  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:14.592121  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:14.592186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:14.633353  992344 cri.go:89] found id: ""
	I0314 19:27:14.633408  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.633418  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:14.633425  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:14.633490  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:14.670897  992344 cri.go:89] found id: ""
	I0314 19:27:14.670935  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.670947  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:14.670955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:14.671024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:14.713838  992344 cri.go:89] found id: ""
	I0314 19:27:14.713874  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.713884  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:14.713890  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:14.713957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:14.751113  992344 cri.go:89] found id: ""
	I0314 19:27:14.751143  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.751151  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:14.751158  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:14.751209  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:14.792485  992344 cri.go:89] found id: ""
	I0314 19:27:14.792518  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.792535  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:14.792542  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:14.792606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:14.839250  992344 cri.go:89] found id: ""
	I0314 19:27:14.839284  992344 logs.go:276] 0 containers: []
	W0314 19:27:14.839297  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:14.839309  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:14.839325  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:14.880384  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:14.880421  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:14.941515  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:14.941549  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:14.958810  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:14.958836  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:15.048586  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:15.048610  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:15.048625  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:17.640280  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:17.655841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:17.655901  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:17.698205  992344 cri.go:89] found id: ""
	I0314 19:27:17.698242  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.698254  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:17.698261  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:17.698315  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:17.740854  992344 cri.go:89] found id: ""
	I0314 19:27:17.740892  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.740903  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:17.740910  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:17.740980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:17.783317  992344 cri.go:89] found id: ""
	I0314 19:27:17.783409  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.783426  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:17.783434  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:17.783499  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:17.823514  992344 cri.go:89] found id: ""
	I0314 19:27:17.823541  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.823550  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:17.823556  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:17.823606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:17.859249  992344 cri.go:89] found id: ""
	I0314 19:27:17.859288  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.859301  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:17.859310  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:17.859386  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:17.900636  992344 cri.go:89] found id: ""
	I0314 19:27:17.900670  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.900688  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:17.900703  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:17.900770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:17.939927  992344 cri.go:89] found id: ""
	I0314 19:27:17.939959  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.939970  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:17.939979  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:17.940048  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:17.980507  992344 cri.go:89] found id: ""
	I0314 19:27:17.980539  992344 logs.go:276] 0 containers: []
	W0314 19:27:17.980551  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:17.980563  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:17.980580  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:18.037887  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:18.037925  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:18.054506  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:18.054544  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:18.129987  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:18.130006  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:18.130018  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:18.210364  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:18.210400  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:15.282905  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:17.283421  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:16.943922  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.448400  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:19.406315  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.407142  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:20.758599  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:20.775419  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:20.775480  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:20.814427  992344 cri.go:89] found id: ""
	I0314 19:27:20.814457  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.814469  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:20.814476  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:20.814528  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:20.851020  992344 cri.go:89] found id: ""
	I0314 19:27:20.851056  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.851069  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:20.851077  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:20.851150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:20.894746  992344 cri.go:89] found id: ""
	I0314 19:27:20.894775  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.894784  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:20.894790  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:20.894856  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:20.932852  992344 cri.go:89] found id: ""
	I0314 19:27:20.932884  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.932895  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:20.932903  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:20.932962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:20.977294  992344 cri.go:89] found id: ""
	I0314 19:27:20.977329  992344 logs.go:276] 0 containers: []
	W0314 19:27:20.977341  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:20.977349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:20.977417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:21.018980  992344 cri.go:89] found id: ""
	I0314 19:27:21.019016  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.019027  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:21.019036  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:21.019102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:21.058764  992344 cri.go:89] found id: ""
	I0314 19:27:21.058817  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.058832  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:21.058841  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:21.058915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:21.098126  992344 cri.go:89] found id: ""
	I0314 19:27:21.098168  992344 logs.go:276] 0 containers: []
	W0314 19:27:21.098181  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:21.098194  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:21.098211  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:21.154456  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:21.154490  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:21.170919  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:21.170950  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:21.247945  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:21.247973  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:21.247991  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:21.345152  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:21.345193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:19.782502  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.783005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:21.944293  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.945097  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.442970  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.907276  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:25.907425  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:27.907517  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:23.900146  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:23.917834  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:23.917896  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:23.959759  992344 cri.go:89] found id: ""
	I0314 19:27:23.959787  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.959800  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:23.959808  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:23.959875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:23.999841  992344 cri.go:89] found id: ""
	I0314 19:27:23.999871  992344 logs.go:276] 0 containers: []
	W0314 19:27:23.999880  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:23.999887  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:23.999942  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:24.044031  992344 cri.go:89] found id: ""
	I0314 19:27:24.044063  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.044072  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:24.044078  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:24.044149  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:24.089895  992344 cri.go:89] found id: ""
	I0314 19:27:24.089931  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.089944  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:24.089955  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:24.090023  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:24.131286  992344 cri.go:89] found id: ""
	I0314 19:27:24.131319  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.131331  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:24.131338  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:24.131409  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:24.169376  992344 cri.go:89] found id: ""
	I0314 19:27:24.169408  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.169420  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:24.169428  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:24.169495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:24.215123  992344 cri.go:89] found id: ""
	I0314 19:27:24.215150  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.215159  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:24.215165  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:24.215219  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:24.257440  992344 cri.go:89] found id: ""
	I0314 19:27:24.257476  992344 logs.go:276] 0 containers: []
	W0314 19:27:24.257484  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:24.257494  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:24.257508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.311885  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:24.311916  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:24.326375  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:24.326403  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:24.403176  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:24.403207  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:24.403227  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:24.485890  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:24.485928  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.032675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:27.050221  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:27.050310  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:27.091708  992344 cri.go:89] found id: ""
	I0314 19:27:27.091739  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.091750  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:27.091761  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:27.091828  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:27.135279  992344 cri.go:89] found id: ""
	I0314 19:27:27.135317  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.135329  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:27.135337  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:27.135407  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:27.178163  992344 cri.go:89] found id: ""
	I0314 19:27:27.178194  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.178203  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:27.178209  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:27.178259  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:27.220298  992344 cri.go:89] found id: ""
	I0314 19:27:27.220331  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.220341  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:27.220367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:27.220423  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:27.262087  992344 cri.go:89] found id: ""
	I0314 19:27:27.262122  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.262135  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:27.262143  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:27.262305  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:27.304543  992344 cri.go:89] found id: ""
	I0314 19:27:27.304576  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.304587  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:27.304597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:27.304668  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:27.343860  992344 cri.go:89] found id: ""
	I0314 19:27:27.343889  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.343899  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:27.343905  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:27.343974  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:27.383608  992344 cri.go:89] found id: ""
	I0314 19:27:27.383639  992344 logs.go:276] 0 containers: []
	W0314 19:27:27.383649  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:27.383659  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:27.383673  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:27.398443  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:27.398478  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:27.485215  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:27.485240  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:27.485254  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:27.564067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:27.564110  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:27.608472  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:27.608511  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:24.284517  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:26.783079  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:28.443938  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.445579  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:29.908018  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.406717  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:30.169228  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:30.183876  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:30.183952  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:30.229357  992344 cri.go:89] found id: ""
	I0314 19:27:30.229390  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.229401  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:30.229407  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:30.229474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:30.272970  992344 cri.go:89] found id: ""
	I0314 19:27:30.273007  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.273021  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:30.273030  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:30.273116  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:30.314939  992344 cri.go:89] found id: ""
	I0314 19:27:30.314968  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.314976  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:30.314982  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:30.315031  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:30.350602  992344 cri.go:89] found id: ""
	I0314 19:27:30.350633  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.350644  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:30.350652  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:30.350739  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:30.393907  992344 cri.go:89] found id: ""
	I0314 19:27:30.393939  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.393950  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:30.393958  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:30.394029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:30.431943  992344 cri.go:89] found id: ""
	I0314 19:27:30.431974  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.431983  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:30.431991  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:30.432058  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:30.471873  992344 cri.go:89] found id: ""
	I0314 19:27:30.471900  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.471910  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:30.471918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:30.471981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:30.508842  992344 cri.go:89] found id: ""
	I0314 19:27:30.508865  992344 logs.go:276] 0 containers: []
	W0314 19:27:30.508872  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:30.508882  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:30.508896  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:30.587441  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:30.587471  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:30.587489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:30.670580  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:30.670618  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:30.719846  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:30.719882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:30.779463  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:30.779508  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.296251  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:33.311393  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:33.311452  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:33.351846  992344 cri.go:89] found id: ""
	I0314 19:27:33.351879  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.351889  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:33.351898  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:33.351965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:33.396392  992344 cri.go:89] found id: ""
	I0314 19:27:33.396494  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.396523  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:33.396546  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:33.396637  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:33.451093  992344 cri.go:89] found id: ""
	I0314 19:27:33.451120  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.451130  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:33.451149  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:33.451225  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:33.511427  992344 cri.go:89] found id: ""
	I0314 19:27:33.511474  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.511487  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:33.511495  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:33.511570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:33.560459  992344 cri.go:89] found id: ""
	I0314 19:27:33.560488  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.560500  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:33.560509  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:33.560579  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:33.601454  992344 cri.go:89] found id: ""
	I0314 19:27:33.601491  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.601503  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:33.601512  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:33.601588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:33.640991  992344 cri.go:89] found id: ""
	I0314 19:27:33.641029  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.641042  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:33.641050  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:33.641115  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:33.684359  992344 cri.go:89] found id: ""
	I0314 19:27:33.684390  992344 logs.go:276] 0 containers: []
	W0314 19:27:33.684398  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:33.684412  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:33.684436  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:33.699551  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:33.699583  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:27:29.285022  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:31.782413  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:33.782490  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:32.942996  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.943285  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:34.407243  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.407509  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:27:33.781859  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:33.781893  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:33.781909  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:33.864992  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:33.865036  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:33.911670  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:33.911712  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.466570  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:36.483515  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:36.483611  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:36.522488  992344 cri.go:89] found id: ""
	I0314 19:27:36.522521  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.522533  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:36.522549  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:36.522607  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:36.561676  992344 cri.go:89] found id: ""
	I0314 19:27:36.561714  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.561728  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:36.561737  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:36.561810  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:36.604512  992344 cri.go:89] found id: ""
	I0314 19:27:36.604547  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.604559  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:36.604568  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:36.604640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:36.645387  992344 cri.go:89] found id: ""
	I0314 19:27:36.645416  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.645425  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:36.645430  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:36.645495  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:36.682951  992344 cri.go:89] found id: ""
	I0314 19:27:36.682976  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.682984  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:36.682989  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:36.683040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:36.725464  992344 cri.go:89] found id: ""
	I0314 19:27:36.725517  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.725530  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:36.725538  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:36.725601  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:36.766542  992344 cri.go:89] found id: ""
	I0314 19:27:36.766578  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.766590  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:36.766598  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:36.766663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:36.809745  992344 cri.go:89] found id: ""
	I0314 19:27:36.809773  992344 logs.go:276] 0 containers: []
	W0314 19:27:36.809782  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:36.809791  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:36.809805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:36.863035  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:36.863069  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:36.877162  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:36.877195  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:36.952727  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:36.952747  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:36.952759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:37.035914  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:37.035953  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:35.783567  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:37.786255  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:36.944521  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:39.445911  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:41.446549  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:38.409392  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:40.914692  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
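The interleaved pod_ready lines come from the other StartStop clusters polling their metrics-server pods, which keep reporting the Ready condition as False. A hedged way to reproduce that readiness check by hand, using a pod name taken from the log (the kubeconfig context must point at the corresponding test cluster, which is an assumption here):

    # Prints "True" once the pod's Ready condition is satisfied; these clusters keep returning "False".
    kubectl -n kube-system get pod metrics-server-57f55c9bc5-w8cj6 \
      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'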
	I0314 19:27:39.581600  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:39.595798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:39.595875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:39.635374  992344 cri.go:89] found id: ""
	I0314 19:27:39.635406  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.635418  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:39.635426  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:39.635488  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:39.674527  992344 cri.go:89] found id: ""
	I0314 19:27:39.674560  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.674571  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:39.674579  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:39.674649  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:39.714313  992344 cri.go:89] found id: ""
	I0314 19:27:39.714357  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.714370  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:39.714380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:39.714449  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:39.754346  992344 cri.go:89] found id: ""
	I0314 19:27:39.754383  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.754395  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:39.754402  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:39.754468  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:39.799448  992344 cri.go:89] found id: ""
	I0314 19:27:39.799481  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.799493  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:39.799500  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:39.799551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:39.841550  992344 cri.go:89] found id: ""
	I0314 19:27:39.841582  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.841592  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:39.841601  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:39.841673  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:39.878581  992344 cri.go:89] found id: ""
	I0314 19:27:39.878612  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.878624  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:39.878630  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:39.878681  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:39.917419  992344 cri.go:89] found id: ""
	I0314 19:27:39.917444  992344 logs.go:276] 0 containers: []
	W0314 19:27:39.917454  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:39.917465  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:39.917480  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:39.976304  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:39.976340  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:39.993786  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:39.993825  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:40.074428  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:40.074458  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:40.074481  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:40.156135  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:40.156177  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:42.700758  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:42.716600  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:42.716672  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:42.763646  992344 cri.go:89] found id: ""
	I0314 19:27:42.763682  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.763694  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:42.763702  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:42.763770  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:42.804246  992344 cri.go:89] found id: ""
	I0314 19:27:42.804280  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.804288  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:42.804295  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:42.804360  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:42.847415  992344 cri.go:89] found id: ""
	I0314 19:27:42.847445  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.847455  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:42.847463  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:42.847527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:42.884340  992344 cri.go:89] found id: ""
	I0314 19:27:42.884376  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.884386  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:42.884395  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:42.884464  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:42.923583  992344 cri.go:89] found id: ""
	I0314 19:27:42.923615  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.923634  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:42.923642  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:42.923704  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:42.969164  992344 cri.go:89] found id: ""
	I0314 19:27:42.969195  992344 logs.go:276] 0 containers: []
	W0314 19:27:42.969207  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:42.969215  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:42.969291  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:43.013760  992344 cri.go:89] found id: ""
	I0314 19:27:43.013793  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.013802  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:43.013808  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:43.013881  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:43.056930  992344 cri.go:89] found id: ""
	I0314 19:27:43.056964  992344 logs.go:276] 0 containers: []
	W0314 19:27:43.056976  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:43.056989  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:43.057004  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:43.145067  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:43.145104  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:43.196679  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:43.196714  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:43.252329  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:43.252363  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:43.268635  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:43.268663  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:43.353391  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
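Every "describe nodes" attempt fails the same way: kubectl cannot reach the apiserver on localhost:8443, which is consistent with crictl finding no kube-apiserver container at all. A quick check from inside the guest of whether anything is listening on that port (a sketch; availability of curl and ss on the minikube guest is assumed, and the connection-refused outcome is what the log shows):

    # Expect "connection refused" while no kube-apiserver container is running.
    curl -ksS https://localhost:8443/healthz || true
    sudo ss -ltnp | grep -w 8443 || echo "nothing listening on 8443"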
	I0314 19:27:40.284010  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:42.784684  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.447800  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.943282  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:43.409130  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.908067  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:45.853793  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:45.867904  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:45.867971  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:45.909352  992344 cri.go:89] found id: ""
	I0314 19:27:45.909376  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.909387  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:45.909394  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:45.909451  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:45.950885  992344 cri.go:89] found id: ""
	I0314 19:27:45.950920  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.950931  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:45.950939  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:45.951006  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:45.987907  992344 cri.go:89] found id: ""
	I0314 19:27:45.987940  992344 logs.go:276] 0 containers: []
	W0314 19:27:45.987951  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:45.987959  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:45.988025  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:46.026894  992344 cri.go:89] found id: ""
	I0314 19:27:46.026930  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.026942  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:46.026950  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:46.027047  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:46.074867  992344 cri.go:89] found id: ""
	I0314 19:27:46.074901  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.074911  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:46.074918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:46.074981  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:46.111516  992344 cri.go:89] found id: ""
	I0314 19:27:46.111551  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.111562  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:46.111570  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:46.111633  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:46.151560  992344 cri.go:89] found id: ""
	I0314 19:27:46.151590  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.151601  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:46.151610  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:46.151674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:46.191684  992344 cri.go:89] found id: ""
	I0314 19:27:46.191719  992344 logs.go:276] 0 containers: []
	W0314 19:27:46.191730  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:46.191742  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:46.191757  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:46.245152  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:46.245189  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:46.261705  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:46.261741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:46.342381  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:46.342409  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:46.342424  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:46.437995  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:46.438031  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:45.283412  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:47.782838  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.443371  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.446353  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.406887  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:50.408726  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.410088  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:48.981814  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:48.998620  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:48.998689  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:49.040608  992344 cri.go:89] found id: ""
	I0314 19:27:49.040643  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.040653  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:49.040659  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:49.040711  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:49.083505  992344 cri.go:89] found id: ""
	I0314 19:27:49.083531  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.083539  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:49.083544  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:49.083606  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:49.127355  992344 cri.go:89] found id: ""
	I0314 19:27:49.127383  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.127391  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:49.127399  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:49.127472  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:49.165694  992344 cri.go:89] found id: ""
	I0314 19:27:49.165726  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.165738  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:49.165746  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:49.165813  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:49.209407  992344 cri.go:89] found id: ""
	I0314 19:27:49.209440  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.209449  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:49.209455  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:49.209516  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:49.250450  992344 cri.go:89] found id: ""
	I0314 19:27:49.250482  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.250493  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:49.250499  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:49.250560  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:49.294041  992344 cri.go:89] found id: ""
	I0314 19:27:49.294070  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.294079  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:49.294085  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:49.294150  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:49.333664  992344 cri.go:89] found id: ""
	I0314 19:27:49.333706  992344 logs.go:276] 0 containers: []
	W0314 19:27:49.333719  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:49.333731  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:49.333749  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:49.348323  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:49.348351  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:49.428896  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:49.428917  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:49.428929  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:49.510395  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:49.510431  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:49.553630  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:49.553669  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.105763  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:52.120888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:52.120956  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:52.158143  992344 cri.go:89] found id: ""
	I0314 19:27:52.158174  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.158188  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:52.158196  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:52.158271  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:52.198254  992344 cri.go:89] found id: ""
	I0314 19:27:52.198285  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.198294  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:52.198299  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:52.198372  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:52.237973  992344 cri.go:89] found id: ""
	I0314 19:27:52.238001  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.238009  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:52.238015  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:52.238066  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:52.283766  992344 cri.go:89] found id: ""
	I0314 19:27:52.283798  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.283809  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:52.283817  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:52.283889  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:52.325861  992344 cri.go:89] found id: ""
	I0314 19:27:52.325896  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.325906  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:52.325914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:52.325983  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:52.367582  992344 cri.go:89] found id: ""
	I0314 19:27:52.367612  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.367622  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:52.367631  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:52.367698  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:52.405009  992344 cri.go:89] found id: ""
	I0314 19:27:52.405043  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.405054  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:52.405062  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:52.405125  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:52.447560  992344 cri.go:89] found id: ""
	I0314 19:27:52.447584  992344 logs.go:276] 0 containers: []
	W0314 19:27:52.447594  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:52.447605  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:52.447620  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:52.519023  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:52.519048  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:52.519062  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:52.603256  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:52.603297  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:52.650926  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:52.650957  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:52.708743  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:52.708784  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:50.284259  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.286540  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:52.944152  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.446257  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:54.910578  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.407194  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:55.225549  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:55.242914  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:55.242992  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:55.284249  992344 cri.go:89] found id: ""
	I0314 19:27:55.284280  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.284291  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:55.284298  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:55.284362  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:55.333784  992344 cri.go:89] found id: ""
	I0314 19:27:55.333821  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.333833  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:55.333840  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:55.333916  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:55.375444  992344 cri.go:89] found id: ""
	I0314 19:27:55.375498  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.375511  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:55.375519  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:55.375598  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:55.416225  992344 cri.go:89] found id: ""
	I0314 19:27:55.416259  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.416269  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:55.416276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:55.416340  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:55.461097  992344 cri.go:89] found id: ""
	I0314 19:27:55.461138  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.461150  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:55.461166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:55.461235  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:55.504621  992344 cri.go:89] found id: ""
	I0314 19:27:55.504659  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.504670  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:55.504679  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:55.504755  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:55.545075  992344 cri.go:89] found id: ""
	I0314 19:27:55.545111  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.545123  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:55.545130  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:55.545221  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:55.584137  992344 cri.go:89] found id: ""
	I0314 19:27:55.584197  992344 logs.go:276] 0 containers: []
	W0314 19:27:55.584235  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:55.584252  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:55.584274  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:55.642705  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:55.642741  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:55.657487  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:55.657516  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:55.738379  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:55.738414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:55.738432  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:55.827582  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:55.827621  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:27:58.374265  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:27:58.389764  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:27:58.389878  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:27:58.431760  992344 cri.go:89] found id: ""
	I0314 19:27:58.431798  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.431810  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:27:58.431818  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:27:58.431880  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:27:58.471389  992344 cri.go:89] found id: ""
	I0314 19:27:58.471415  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.471424  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:27:58.471430  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:27:58.471478  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:27:58.508875  992344 cri.go:89] found id: ""
	I0314 19:27:58.508903  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.508910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:27:58.508916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:27:58.508965  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:27:58.546016  992344 cri.go:89] found id: ""
	I0314 19:27:58.546042  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.546051  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:27:58.546057  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:27:58.546106  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:27:58.586319  992344 cri.go:89] found id: ""
	I0314 19:27:58.586346  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.586354  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:27:58.586360  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:27:58.586414  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:27:58.625381  992344 cri.go:89] found id: ""
	I0314 19:27:58.625411  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.625423  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:27:58.625431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:27:58.625494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:27:58.663016  992344 cri.go:89] found id: ""
	I0314 19:27:58.663047  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.663059  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:27:58.663068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:27:58.663131  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:27:58.703100  992344 cri.go:89] found id: ""
	I0314 19:27:58.703144  992344 logs.go:276] 0 containers: []
	W0314 19:27:58.703159  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:27:58.703172  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:27:58.703190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:27:54.782936  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:56.783549  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.783602  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:57.943515  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:00.443787  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:59.908025  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:01.908119  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:27:58.755081  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:27:58.755116  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:27:58.770547  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:27:58.770577  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:27:58.850354  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:27:58.850379  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:27:58.850395  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:27:58.944115  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:27:58.944152  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.489937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:01.505233  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:01.505309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:01.544381  992344 cri.go:89] found id: ""
	I0314 19:28:01.544417  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.544429  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:01.544437  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:01.544502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:01.582639  992344 cri.go:89] found id: ""
	I0314 19:28:01.582668  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.582676  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:01.582684  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:01.582745  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:01.621926  992344 cri.go:89] found id: ""
	I0314 19:28:01.621957  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.621968  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:01.621976  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:01.622040  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:01.659749  992344 cri.go:89] found id: ""
	I0314 19:28:01.659779  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.659791  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:01.659798  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:01.659869  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:01.696467  992344 cri.go:89] found id: ""
	I0314 19:28:01.696497  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.696505  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:01.696511  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:01.696570  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:01.735273  992344 cri.go:89] found id: ""
	I0314 19:28:01.735301  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.735310  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:01.735316  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:01.735381  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:01.777051  992344 cri.go:89] found id: ""
	I0314 19:28:01.777081  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.777090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:01.777096  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:01.777155  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:01.820851  992344 cri.go:89] found id: ""
	I0314 19:28:01.820883  992344 logs.go:276] 0 containers: []
	W0314 19:28:01.820894  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:01.820911  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:01.820926  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:01.874599  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:01.874632  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:01.888971  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:01.889007  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:01.971786  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:01.971806  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:01.971819  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:02.064070  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:02.064114  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:01.283565  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.782799  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:02.446196  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:04.944339  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:03.917838  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:06.409597  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
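	The interleaved pod_ready lines above come from three other test profiles (processes 991880, 992563 and 992056) polling whether their metrics-server pods have reached the Ready condition; they keep reporting "Ready":"False" while those pods stay unready. A rough manual equivalent of that check, using the pod and namespace names recorded in the log (the profile context placeholder and the jsonpath query are assumptions for illustration, not the harness's own code):

	    # Read the Ready condition of one of the metrics-server pods seen in the log
	    kubectl --context <profile-context> -n kube-system \
	      get pod metrics-server-57f55c9bc5-rhg5r \
	      -o jsonpath='{.status.conditions[?(@.type=="Ready")].status}'
	    # Prints "False" for as long as the pod_ready.go:102 lines keep appearing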
	I0314 19:28:04.610064  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:04.625349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:04.625417  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:04.664254  992344 cri.go:89] found id: ""
	I0314 19:28:04.664284  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.664293  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:04.664299  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:04.664348  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:04.704466  992344 cri.go:89] found id: ""
	I0314 19:28:04.704502  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.704514  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:04.704523  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:04.704588  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:04.745733  992344 cri.go:89] found id: ""
	I0314 19:28:04.745762  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.745773  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:04.745781  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:04.745846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:04.790435  992344 cri.go:89] found id: ""
	I0314 19:28:04.790465  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.790477  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:04.790485  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:04.790550  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:04.829215  992344 cri.go:89] found id: ""
	I0314 19:28:04.829255  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.829268  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:04.829276  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:04.829343  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:04.874200  992344 cri.go:89] found id: ""
	I0314 19:28:04.874234  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.874246  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:04.874253  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:04.874318  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:04.915882  992344 cri.go:89] found id: ""
	I0314 19:28:04.915909  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.915920  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:04.915928  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:04.915994  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:04.954000  992344 cri.go:89] found id: ""
	I0314 19:28:04.954027  992344 logs.go:276] 0 containers: []
	W0314 19:28:04.954038  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:04.954049  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:04.954063  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:04.996511  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:04.996540  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:05.049608  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:05.049644  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:05.064401  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:05.064437  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:05.145169  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:05.145189  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:05.145202  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
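	The block above is one full diagnostic pass by process 992344 against a node running Kubernetes v1.20.0 whose apiserver is down: it probes for a kube-apiserver process, asks the CRI runtime for each control-plane container by name, finds none, and then gathers kubelet, dmesg, describe-nodes, CRI-O and container-status output, with describe nodes failing because localhost:8443 refuses connections. The same pass repeats below every few seconds. A minimal sketch of running those diagnostics by hand inside the guest (for example over minikube ssh), using only the commands already recorded in the log (the harness varies their order between passes):

	    # Probe for a running apiserver process (non-zero exit while the control plane is down)
	    sudo pgrep -xnf 'kube-apiserver.*minikube.*'

	    # Ask the CRI runtime for a control-plane container by name; empty output
	    # matches the found id: "" lines in the log
	    sudo crictl ps -a --quiet --name=kube-apiserver

	    # Fallback log collection, mirroring the "Gathering logs for ..." steps
	    sudo journalctl -u kubelet -n 400
	    sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400
	    sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig
	    sudo journalctl -u crio -n 400
	    sudo $(which crictl || echo crictl) ps -a || sudo docker ps -a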
	I0314 19:28:07.734535  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:07.765003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:07.765099  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:07.814489  992344 cri.go:89] found id: ""
	I0314 19:28:07.814518  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.814526  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:07.814532  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:07.814595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:07.854337  992344 cri.go:89] found id: ""
	I0314 19:28:07.854368  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.854378  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:07.854384  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:07.854455  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:07.894430  992344 cri.go:89] found id: ""
	I0314 19:28:07.894465  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.894479  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:07.894487  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:07.894551  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:07.939473  992344 cri.go:89] found id: ""
	I0314 19:28:07.939504  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.939515  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:07.939524  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:07.939591  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:07.982584  992344 cri.go:89] found id: ""
	I0314 19:28:07.982627  992344 logs.go:276] 0 containers: []
	W0314 19:28:07.982640  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:07.982649  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:07.982710  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:08.020038  992344 cri.go:89] found id: ""
	I0314 19:28:08.020065  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.020074  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:08.020080  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:08.020138  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:08.058377  992344 cri.go:89] found id: ""
	I0314 19:28:08.058412  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.058423  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:08.058431  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:08.058509  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:08.096241  992344 cri.go:89] found id: ""
	I0314 19:28:08.096273  992344 logs.go:276] 0 containers: []
	W0314 19:28:08.096284  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:08.096294  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:08.096308  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:08.174276  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:08.174315  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:08.221249  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:08.221282  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:08.273899  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:08.273930  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:08.290166  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:08.290193  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:08.382154  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:06.283073  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.784385  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:07.447546  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:09.448168  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:08.906422  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.907030  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:10.882385  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:10.898126  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:10.898200  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:10.939972  992344 cri.go:89] found id: ""
	I0314 19:28:10.940001  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.940012  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:10.940019  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:10.940084  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:10.985154  992344 cri.go:89] found id: ""
	I0314 19:28:10.985187  992344 logs.go:276] 0 containers: []
	W0314 19:28:10.985199  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:10.985212  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:10.985278  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:11.023955  992344 cri.go:89] found id: ""
	I0314 19:28:11.024004  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.024017  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:11.024025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:11.024094  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:11.065508  992344 cri.go:89] found id: ""
	I0314 19:28:11.065534  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.065543  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:11.065549  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:11.065620  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:11.103903  992344 cri.go:89] found id: ""
	I0314 19:28:11.103930  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.103938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:11.103944  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:11.103997  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:11.146820  992344 cri.go:89] found id: ""
	I0314 19:28:11.146856  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.146866  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:11.146873  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:11.146930  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:11.195840  992344 cri.go:89] found id: ""
	I0314 19:28:11.195871  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.195880  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:11.195888  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:11.195957  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:11.237594  992344 cri.go:89] found id: ""
	I0314 19:28:11.237628  992344 logs.go:276] 0 containers: []
	W0314 19:28:11.237647  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:11.237658  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:11.237671  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:11.297323  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:11.297356  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:11.313785  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:11.313815  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:11.393416  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:11.393444  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:11.393461  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:11.472938  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:11.472972  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:11.283364  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.283657  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:11.945477  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.443000  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:13.406341  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:15.905918  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.907047  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:14.025870  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:14.039597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:14.039667  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:14.076786  992344 cri.go:89] found id: ""
	I0314 19:28:14.076822  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.076834  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:14.076842  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:14.076911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:14.114754  992344 cri.go:89] found id: ""
	I0314 19:28:14.114796  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.114815  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:14.114823  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:14.114893  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:14.158360  992344 cri.go:89] found id: ""
	I0314 19:28:14.158396  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.158408  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:14.158417  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:14.158489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:14.208587  992344 cri.go:89] found id: ""
	I0314 19:28:14.208626  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.208638  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:14.208646  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:14.208712  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:14.253013  992344 cri.go:89] found id: ""
	I0314 19:28:14.253049  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.253062  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:14.253071  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:14.253142  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:14.313793  992344 cri.go:89] found id: ""
	I0314 19:28:14.313830  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.313843  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:14.313851  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:14.313918  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:14.352044  992344 cri.go:89] found id: ""
	I0314 19:28:14.352076  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.352087  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:14.352094  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:14.352161  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:14.389393  992344 cri.go:89] found id: ""
	I0314 19:28:14.389427  992344 logs.go:276] 0 containers: []
	W0314 19:28:14.389436  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:14.389446  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:14.389464  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:14.447873  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:14.447914  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:14.462610  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:14.462636  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:14.543393  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:14.543414  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:14.543427  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:14.628147  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:14.628190  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.177617  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:17.193408  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:17.193481  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:17.233133  992344 cri.go:89] found id: ""
	I0314 19:28:17.233161  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.233170  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:17.233183  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:17.233252  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:17.270429  992344 cri.go:89] found id: ""
	I0314 19:28:17.270459  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.270471  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:17.270479  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:17.270559  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:17.309915  992344 cri.go:89] found id: ""
	I0314 19:28:17.309939  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.309947  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:17.309952  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:17.309999  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:17.347157  992344 cri.go:89] found id: ""
	I0314 19:28:17.347188  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.347199  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:17.347206  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:17.347269  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:17.388837  992344 cri.go:89] found id: ""
	I0314 19:28:17.388866  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.388877  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:17.388884  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:17.388948  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:17.425945  992344 cri.go:89] found id: ""
	I0314 19:28:17.425969  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.425977  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:17.425983  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:17.426051  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:17.470291  992344 cri.go:89] found id: ""
	I0314 19:28:17.470320  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.470356  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:17.470365  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:17.470424  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:17.507512  992344 cri.go:89] found id: ""
	I0314 19:28:17.507541  992344 logs.go:276] 0 containers: []
	W0314 19:28:17.507549  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:17.507559  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:17.507575  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:17.550148  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:17.550186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:17.603728  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:17.603759  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:17.619160  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:17.619186  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:17.699649  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:17.699683  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:17.699701  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:15.782780  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:17.783204  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:16.942941  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:18.943420  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:21.450329  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.407873  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.905658  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:20.284486  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:20.300132  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:20.300198  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:20.341566  992344 cri.go:89] found id: ""
	I0314 19:28:20.341608  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.341620  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:20.341629  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:20.341700  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:20.379527  992344 cri.go:89] found id: ""
	I0314 19:28:20.379555  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.379562  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:20.379568  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:20.379640  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:20.425871  992344 cri.go:89] found id: ""
	I0314 19:28:20.425902  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.425910  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:20.425916  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:20.425980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:20.464939  992344 cri.go:89] found id: ""
	I0314 19:28:20.464979  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.464993  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:20.465003  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:20.465075  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:20.500954  992344 cri.go:89] found id: ""
	I0314 19:28:20.500982  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.500993  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:20.501001  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:20.501063  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:20.542049  992344 cri.go:89] found id: ""
	I0314 19:28:20.542080  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.542090  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:20.542098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:20.542178  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:20.577298  992344 cri.go:89] found id: ""
	I0314 19:28:20.577325  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.577333  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:20.577340  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:20.577389  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.618467  992344 cri.go:89] found id: ""
	I0314 19:28:20.618498  992344 logs.go:276] 0 containers: []
	W0314 19:28:20.618511  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:20.618523  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:20.618537  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:20.694238  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:20.694280  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:20.694298  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:20.778845  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:20.778882  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:20.821575  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:20.821606  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:20.876025  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:20.876061  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.391129  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:23.408183  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:23.408276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:23.449128  992344 cri.go:89] found id: ""
	I0314 19:28:23.449169  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.449180  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:23.449186  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:23.449276  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:23.486168  992344 cri.go:89] found id: ""
	I0314 19:28:23.486201  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.486223  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:23.486242  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:23.486299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:23.525452  992344 cri.go:89] found id: ""
	I0314 19:28:23.525484  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.525492  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:23.525498  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:23.525553  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:23.560947  992344 cri.go:89] found id: ""
	I0314 19:28:23.560982  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.561037  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:23.561054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:23.561121  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:23.607261  992344 cri.go:89] found id: ""
	I0314 19:28:23.607298  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.607310  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:23.607317  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:23.607392  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:23.646849  992344 cri.go:89] found id: ""
	I0314 19:28:23.646881  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.646891  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:23.646896  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:23.646962  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:23.684108  992344 cri.go:89] found id: ""
	I0314 19:28:23.684133  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.684140  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:23.684146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:23.684197  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:20.283546  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:22.783059  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.942845  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:25.943049  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:24.905817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.908404  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:23.723284  992344 cri.go:89] found id: ""
	I0314 19:28:23.723320  992344 logs.go:276] 0 containers: []
	W0314 19:28:23.723331  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:23.723343  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:23.723359  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:23.785024  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:23.785066  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:23.801136  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:23.801167  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:23.875721  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:23.875749  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:23.875766  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:23.969377  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:23.969420  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:26.517771  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:26.533260  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:26.533349  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:26.573712  992344 cri.go:89] found id: ""
	I0314 19:28:26.573750  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.573762  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:26.573770  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:26.573846  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:26.610738  992344 cri.go:89] found id: ""
	I0314 19:28:26.610768  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.610777  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:26.610783  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:26.610836  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:26.652014  992344 cri.go:89] found id: ""
	I0314 19:28:26.652041  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.652049  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:26.652054  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:26.652109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:26.687344  992344 cri.go:89] found id: ""
	I0314 19:28:26.687377  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.687389  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:26.687398  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:26.687466  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:26.725897  992344 cri.go:89] found id: ""
	I0314 19:28:26.725926  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.725938  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:26.725945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:26.726008  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:26.772328  992344 cri.go:89] found id: ""
	I0314 19:28:26.772357  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.772367  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:26.772375  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:26.772440  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:26.814721  992344 cri.go:89] found id: ""
	I0314 19:28:26.814757  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.814768  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:26.814776  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:26.814841  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:26.849726  992344 cri.go:89] found id: ""
	I0314 19:28:26.849763  992344 logs.go:276] 0 containers: []
	W0314 19:28:26.849781  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:26.849794  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:26.849811  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:26.932680  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:26.932709  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:26.932725  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:27.011721  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:27.011787  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:27.059121  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:27.059160  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:27.110392  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:27.110430  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:24.783160  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:26.783703  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:27.943562  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.943880  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.405973  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.407122  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:29.625784  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:29.642945  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:29.643024  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:29.681233  992344 cri.go:89] found id: ""
	I0314 19:28:29.681267  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.681279  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:29.681286  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:29.681351  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:29.729735  992344 cri.go:89] found id: ""
	I0314 19:28:29.729764  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.729773  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:29.729779  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:29.729835  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:29.773873  992344 cri.go:89] found id: ""
	I0314 19:28:29.773902  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.773911  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:29.773918  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:29.773973  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:29.815982  992344 cri.go:89] found id: ""
	I0314 19:28:29.816009  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.816019  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:29.816025  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:29.816102  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:29.855295  992344 cri.go:89] found id: ""
	I0314 19:28:29.855328  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.855343  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:29.855349  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:29.855404  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:29.893580  992344 cri.go:89] found id: ""
	I0314 19:28:29.893618  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.893630  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:29.893638  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:29.893705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:29.939721  992344 cri.go:89] found id: ""
	I0314 19:28:29.939752  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.939763  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:29.939770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:29.939837  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:29.978277  992344 cri.go:89] found id: ""
	I0314 19:28:29.978315  992344 logs.go:276] 0 containers: []
	W0314 19:28:29.978328  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:29.978347  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:29.978362  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:30.031723  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:30.031761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:30.046940  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:30.046968  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:30.124190  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:30.124226  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:30.124244  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:30.203448  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:30.203488  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:32.756750  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:32.772599  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:32.772679  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:32.812033  992344 cri.go:89] found id: ""
	I0314 19:28:32.812061  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.812069  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:32.812076  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:32.812165  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:32.855461  992344 cri.go:89] found id: ""
	I0314 19:28:32.855490  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.855501  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:32.855509  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:32.855575  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:32.900644  992344 cri.go:89] found id: ""
	I0314 19:28:32.900675  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.900686  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:32.900694  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:32.900772  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:32.942120  992344 cri.go:89] found id: ""
	I0314 19:28:32.942155  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.942166  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:32.942175  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:32.942238  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:32.981325  992344 cri.go:89] found id: ""
	I0314 19:28:32.981352  992344 logs.go:276] 0 containers: []
	W0314 19:28:32.981360  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:32.981367  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:32.981419  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:33.019732  992344 cri.go:89] found id: ""
	I0314 19:28:33.019767  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.019781  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:33.019789  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:33.019852  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:33.060205  992344 cri.go:89] found id: ""
	I0314 19:28:33.060262  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.060274  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:33.060283  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:33.060350  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:33.100456  992344 cri.go:89] found id: ""
	I0314 19:28:33.100490  992344 logs.go:276] 0 containers: []
	W0314 19:28:33.100517  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:33.100529  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:33.100548  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:33.114637  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:33.114668  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:33.186983  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:33.187010  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:33.187024  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:33.268816  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:33.268856  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:33.314600  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:33.314634  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:29.282840  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:31.783516  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:32.443948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:34.942835  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:33.906912  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.908364  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:35.870832  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:35.886088  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:35.886168  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:35.929548  992344 cri.go:89] found id: ""
	I0314 19:28:35.929580  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.929590  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:35.929598  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:35.929675  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:35.970315  992344 cri.go:89] found id: ""
	I0314 19:28:35.970351  992344 logs.go:276] 0 containers: []
	W0314 19:28:35.970364  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:35.970372  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:35.970438  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:36.010663  992344 cri.go:89] found id: ""
	I0314 19:28:36.010696  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.010716  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:36.010723  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:36.010806  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:36.055521  992344 cri.go:89] found id: ""
	I0314 19:28:36.055558  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.055569  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:36.055578  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:36.055648  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:36.095768  992344 cri.go:89] found id: ""
	I0314 19:28:36.095799  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.095810  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:36.095821  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:36.095875  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:36.132820  992344 cri.go:89] found id: ""
	I0314 19:28:36.132848  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.132856  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:36.132861  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:36.132915  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:36.173162  992344 cri.go:89] found id: ""
	I0314 19:28:36.173196  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.173209  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:36.173217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:36.173287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:36.211796  992344 cri.go:89] found id: ""
	I0314 19:28:36.211822  992344 logs.go:276] 0 containers: []
	W0314 19:28:36.211830  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:36.211839  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:36.211854  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:36.271494  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:36.271536  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:36.289341  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:36.289366  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:36.368331  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:36.368361  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:36.368378  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:36.448945  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:36.448993  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:34.283005  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.286755  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.781678  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:36.943412  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:39.450650  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.407910  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:40.409015  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:42.906420  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:38.995675  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:39.009626  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:39.009705  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:39.051085  992344 cri.go:89] found id: ""
	I0314 19:28:39.051119  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.051128  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:39.051134  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:39.051184  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:39.090167  992344 cri.go:89] found id: ""
	I0314 19:28:39.090201  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.090214  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:39.090221  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:39.090293  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:39.129345  992344 cri.go:89] found id: ""
	I0314 19:28:39.129388  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.129404  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:39.129411  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:39.129475  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:39.166678  992344 cri.go:89] found id: ""
	I0314 19:28:39.166731  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.166741  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:39.166750  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:39.166822  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:39.206329  992344 cri.go:89] found id: ""
	I0314 19:28:39.206368  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.206381  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:39.206389  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:39.206442  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:39.245158  992344 cri.go:89] found id: ""
	I0314 19:28:39.245187  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.245196  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:39.245202  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:39.245253  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:39.289207  992344 cri.go:89] found id: ""
	I0314 19:28:39.289243  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.289259  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:39.289267  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:39.289335  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:39.327437  992344 cri.go:89] found id: ""
	I0314 19:28:39.327462  992344 logs.go:276] 0 containers: []
	W0314 19:28:39.327472  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:39.327484  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:39.327500  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:39.381681  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:39.381724  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:39.397060  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:39.397097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:39.482718  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:39.482744  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:39.482761  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:39.566304  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:39.566349  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.111937  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:42.126968  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:42.127033  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:42.168671  992344 cri.go:89] found id: ""
	I0314 19:28:42.168701  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.168713  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:42.168721  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:42.168792  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:42.213285  992344 cri.go:89] found id: ""
	I0314 19:28:42.213311  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.213319  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:42.213325  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:42.213388  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:42.255036  992344 cri.go:89] found id: ""
	I0314 19:28:42.255075  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.255085  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:42.255090  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:42.255159  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:42.296863  992344 cri.go:89] found id: ""
	I0314 19:28:42.296896  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.296907  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:42.296915  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:42.296978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:42.338346  992344 cri.go:89] found id: ""
	I0314 19:28:42.338402  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.338413  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:42.338421  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:42.338489  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:42.374667  992344 cri.go:89] found id: ""
	I0314 19:28:42.374691  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.374699  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:42.374711  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:42.374774  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:42.412676  992344 cri.go:89] found id: ""
	I0314 19:28:42.412702  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.412713  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:42.412721  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:42.412786  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:42.451093  992344 cri.go:89] found id: ""
	I0314 19:28:42.451125  992344 logs.go:276] 0 containers: []
	W0314 19:28:42.451135  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:42.451147  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:42.451162  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:42.531130  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:42.531176  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:42.576583  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:42.576623  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:42.633675  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:42.633715  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:42.650154  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:42.650188  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:42.731282  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:41.282876  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.283770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:41.942349  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:43.942831  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.943723  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:44.907134  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:46.907817  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:45.231813  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:45.246939  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:45.247029  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:45.289033  992344 cri.go:89] found id: ""
	I0314 19:28:45.289057  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.289066  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:45.289071  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:45.289128  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:45.327007  992344 cri.go:89] found id: ""
	I0314 19:28:45.327034  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.327043  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:45.327048  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:45.327109  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:45.363725  992344 cri.go:89] found id: ""
	I0314 19:28:45.363757  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.363770  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:45.363778  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:45.363833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:45.400775  992344 cri.go:89] found id: ""
	I0314 19:28:45.400808  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.400819  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:45.400826  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:45.400887  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:45.438717  992344 cri.go:89] found id: ""
	I0314 19:28:45.438750  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.438762  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:45.438770  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:45.438833  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:45.483296  992344 cri.go:89] found id: ""
	I0314 19:28:45.483334  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.483349  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:45.483355  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:45.483406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:45.519840  992344 cri.go:89] found id: ""
	I0314 19:28:45.519872  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.519881  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:45.519887  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:45.519939  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:45.560535  992344 cri.go:89] found id: ""
	I0314 19:28:45.560565  992344 logs.go:276] 0 containers: []
	W0314 19:28:45.560577  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:45.560590  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:45.560613  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:45.639453  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:45.639476  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:45.639489  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:45.724224  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:45.724265  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:45.768456  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:45.768494  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:45.828111  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:45.828154  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.345352  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:48.358823  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:48.358879  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:48.401545  992344 cri.go:89] found id: ""
	I0314 19:28:48.401575  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.401586  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:48.401595  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:48.401655  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:48.442031  992344 cri.go:89] found id: ""
	I0314 19:28:48.442062  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.442073  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:48.442081  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:48.442186  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:48.481192  992344 cri.go:89] found id: ""
	I0314 19:28:48.481230  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.481239  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:48.481245  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:48.481309  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:48.522127  992344 cri.go:89] found id: ""
	I0314 19:28:48.522162  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.522171  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:48.522177  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:48.522233  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:48.562763  992344 cri.go:89] found id: ""
	I0314 19:28:48.562791  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.562800  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:48.562806  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:48.562866  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:48.606256  992344 cri.go:89] found id: ""
	I0314 19:28:48.606290  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.606300  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:48.606309  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:48.606376  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:48.645493  992344 cri.go:89] found id: ""
	I0314 19:28:48.645527  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.645539  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:48.645547  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:48.645634  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:48.686145  992344 cri.go:89] found id: ""
	I0314 19:28:48.686177  992344 logs.go:276] 0 containers: []
	W0314 19:28:48.686189  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:48.686202  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:48.686229  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:48.701771  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:48.701812  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:28:45.784389  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.283921  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.443564  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.445062  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:48.909434  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:50.910456  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	W0314 19:28:48.783905  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:48.783931  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:48.783947  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:48.863824  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:48.863868  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:48.919421  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:48.919456  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:51.491562  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:51.507427  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:51.507494  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:51.549290  992344 cri.go:89] found id: ""
	I0314 19:28:51.549325  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.549337  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:51.549344  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:51.549415  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:51.587540  992344 cri.go:89] found id: ""
	I0314 19:28:51.587575  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.587588  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:51.587595  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:51.587663  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:51.629187  992344 cri.go:89] found id: ""
	I0314 19:28:51.629221  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.629229  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:51.629235  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:51.629299  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:51.670884  992344 cri.go:89] found id: ""
	I0314 19:28:51.670913  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.670921  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:51.670927  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:51.670978  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:51.712751  992344 cri.go:89] found id: ""
	I0314 19:28:51.712783  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.712794  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:51.712802  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:51.712873  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:51.751462  992344 cri.go:89] found id: ""
	I0314 19:28:51.751490  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.751499  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:51.751505  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:51.751572  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:51.793049  992344 cri.go:89] found id: ""
	I0314 19:28:51.793079  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.793090  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:51.793098  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:51.793166  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:51.834793  992344 cri.go:89] found id: ""
	I0314 19:28:51.834825  992344 logs.go:276] 0 containers: []
	W0314 19:28:51.834837  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:51.834850  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:51.834871  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:51.851743  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:51.851792  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:51.927748  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:51.927768  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:51.927780  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:52.011674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:52.011718  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:52.067015  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:52.067059  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:50.783127  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.783450  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:52.942964  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:54.945540  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:53.407301  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:55.907357  992056 pod_ready.go:102] pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:56.900342  992056 pod_ready.go:81] duration metric: took 4m0.000959023s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" ...
	E0314 19:28:56.900373  992056 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-w8cj6" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:28:56.900392  992056 pod_ready.go:38] duration metric: took 4m15.050031566s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:28:56.900431  992056 kubeadm.go:591] duration metric: took 4m22.457881244s to restartPrimaryControlPlane
	W0314 19:28:56.900513  992056 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:28:56.900549  992056 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:28:54.623820  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:54.641380  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:54.641459  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:54.699381  992344 cri.go:89] found id: ""
	I0314 19:28:54.699418  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.699430  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:54.699439  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:54.699507  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:54.752793  992344 cri.go:89] found id: ""
	I0314 19:28:54.752843  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.752865  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:54.752873  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:54.752980  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:54.805116  992344 cri.go:89] found id: ""
	I0314 19:28:54.805148  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.805158  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:54.805166  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:54.805231  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:54.843303  992344 cri.go:89] found id: ""
	I0314 19:28:54.843336  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.843346  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:54.843352  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:54.843406  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:54.879789  992344 cri.go:89] found id: ""
	I0314 19:28:54.879822  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.879834  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:54.879840  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:54.879911  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:54.921874  992344 cri.go:89] found id: ""
	I0314 19:28:54.921903  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.921913  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:54.921921  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:54.922005  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:54.966098  992344 cri.go:89] found id: ""
	I0314 19:28:54.966129  992344 logs.go:276] 0 containers: []
	W0314 19:28:54.966137  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:54.966146  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:54.966201  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:55.005963  992344 cri.go:89] found id: ""
	I0314 19:28:55.005995  992344 logs.go:276] 0 containers: []
	W0314 19:28:55.006006  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:55.006019  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:55.006035  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:55.063802  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:55.063838  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:55.079126  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:55.079157  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:55.156174  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:55.156200  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:55.156241  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:55.237471  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:55.237517  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:57.786574  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:28:57.804359  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:28:57.804446  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:28:57.843520  992344 cri.go:89] found id: ""
	I0314 19:28:57.843554  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.843566  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:28:57.843574  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:28:57.843642  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:28:57.883350  992344 cri.go:89] found id: ""
	I0314 19:28:57.883385  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.883398  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:28:57.883408  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:28:57.883502  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:28:57.926544  992344 cri.go:89] found id: ""
	I0314 19:28:57.926578  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.926589  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:28:57.926597  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:28:57.926674  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:28:57.969832  992344 cri.go:89] found id: ""
	I0314 19:28:57.969861  992344 logs.go:276] 0 containers: []
	W0314 19:28:57.969873  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:28:57.969880  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:28:57.969951  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:28:58.021915  992344 cri.go:89] found id: ""
	I0314 19:28:58.021952  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.021964  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:28:58.021972  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:28:58.022043  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:28:58.068004  992344 cri.go:89] found id: ""
	I0314 19:28:58.068045  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.068059  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:28:58.068067  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:28:58.068147  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:28:58.109350  992344 cri.go:89] found id: ""
	I0314 19:28:58.109385  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.109397  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:28:58.109405  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:28:58.109474  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:28:58.149505  992344 cri.go:89] found id: ""
	I0314 19:28:58.149600  992344 logs.go:276] 0 containers: []
	W0314 19:28:58.149617  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:28:58.149631  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:28:58.149648  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:28:58.165051  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:28:58.165097  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:28:58.260306  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:28:58.260334  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:28:58.260360  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:28:58.347229  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:28:58.347270  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:58.394506  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:28:58.394546  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:28:54.783620  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.282809  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:57.444954  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:28:59.450968  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.452967  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:00.965332  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:00.982169  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:29:00.982254  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:29:01.023125  992344 cri.go:89] found id: ""
	I0314 19:29:01.023161  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.023174  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:29:01.023182  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:29:01.023258  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:29:01.073622  992344 cri.go:89] found id: ""
	I0314 19:29:01.073663  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.073688  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:29:01.073697  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:29:01.073762  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:29:01.128431  992344 cri.go:89] found id: ""
	I0314 19:29:01.128459  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.128468  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:29:01.128474  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:29:01.128538  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:29:01.175167  992344 cri.go:89] found id: ""
	I0314 19:29:01.175196  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.175214  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:29:01.175222  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:29:01.175287  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:29:01.219999  992344 cri.go:89] found id: ""
	I0314 19:29:01.220030  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.220041  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:29:01.220049  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:29:01.220114  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:29:01.267917  992344 cri.go:89] found id: ""
	I0314 19:29:01.267946  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.267954  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:29:01.267961  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:29:01.268010  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:29:01.308402  992344 cri.go:89] found id: ""
	I0314 19:29:01.308437  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.308450  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:29:01.308457  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:29:01.308527  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:29:01.354953  992344 cri.go:89] found id: ""
	I0314 19:29:01.354982  992344 logs.go:276] 0 containers: []
	W0314 19:29:01.354991  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:29:01.355001  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:29:01.355016  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:29:01.409088  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:29:01.409131  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:29:01.424936  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:29:01.424965  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:29:01.517636  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:29:01.517673  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:29:01.517691  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:29:01.632674  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:29:01.632731  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:28:59.284185  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:01.783757  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:03.943195  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:05.943902  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:04.185418  992344 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:04.199946  992344 kubeadm.go:591] duration metric: took 4m3.891459486s to restartPrimaryControlPlane
	W0314 19:29:04.200023  992344 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:04.200050  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:05.838695  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.638615727s)
	I0314 19:29:05.838799  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:05.858457  992344 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:05.870547  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:05.881784  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:05.881805  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:05.881853  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:05.892847  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:05.892892  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:05.904430  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:05.914971  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:05.915037  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:05.925984  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.935559  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:05.935615  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:05.947405  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:05.958132  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:05.958177  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:05.968975  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:06.219425  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
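Note on the block above: at this point minikube has given up on restarting the existing v1.20.0 control plane, so it runs 'kubeadm reset', prunes any kubeconfig under /etc/kubernetes that does not reference the expected API endpoint, and re-runs 'kubeadm init' with a long ignore-preflight list. A rough shell equivalent of the stale-config cleanup shown above (written as a loop for readability; the logged code issues one grep and one rm per file):

    for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
      # keep the file only if it already points at the expected endpoint;
      # here none of the files exist, so every grep fails and rm removes nothing
      sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f" \
        || sudo rm -f "/etc/kubernetes/$f"
    done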
	I0314 19:29:04.283772  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:06.785802  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:07.950776  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:10.445766  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:09.282584  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:11.783655  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:12.942948  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.944204  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:14.282089  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:16.282920  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:18.283071  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:17.446142  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:19.447118  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:21.447921  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:20.284298  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:22.782760  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:23.452826  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:25.944013  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:24.785109  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:27.282770  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:28.443907  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:30.447194  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:29.271454  992056 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.370871229s)
	I0314 19:29:29.271543  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:29.288947  992056 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:29:29.299822  992056 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:29:29.309955  992056 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:29:29.309972  992056 kubeadm.go:156] found existing configuration files:
	
	I0314 19:29:29.310004  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:29:29.320229  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:29:29.320285  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:29:29.331509  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:29:29.342985  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:29:29.343046  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:29:29.352805  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.363317  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:29:29.363376  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:29:29.374226  992056 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:29:29.384400  992056 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:29:29.384444  992056 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:29:29.394962  992056 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:29:29.631020  992056 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:29:29.283297  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:31.782029  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:33.783415  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:32.447974  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:34.943668  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:35.786587  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.282404  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.891396  992056 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:29:38.891457  992056 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:29:38.891550  992056 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:29:38.891703  992056 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:29:38.891857  992056 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:29:38.891965  992056 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:29:38.893298  992056 out.go:204]   - Generating certificates and keys ...
	I0314 19:29:38.893420  992056 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:29:38.893526  992056 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:29:38.893637  992056 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:29:38.893727  992056 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:29:38.893833  992056 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:29:38.893931  992056 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:29:38.894042  992056 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:29:38.894147  992056 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:29:38.894249  992056 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:29:38.894351  992056 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:29:38.894413  992056 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:29:38.894483  992056 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:29:38.894564  992056 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:29:38.894648  992056 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:29:38.894740  992056 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:29:38.894825  992056 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:29:38.894942  992056 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:29:38.895027  992056 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:29:38.896425  992056 out.go:204]   - Booting up control plane ...
	I0314 19:29:38.896545  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:29:38.896665  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:29:38.896773  992056 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:29:38.896879  992056 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:29:38.896980  992056 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:29:38.897045  992056 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:29:38.897200  992056 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:29:38.897278  992056 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.504738 seconds
	I0314 19:29:38.897390  992056 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:29:38.897574  992056 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:29:38.897680  992056 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:29:38.897920  992056 kubeadm.go:309] [mark-control-plane] Marking the node embed-certs-992669 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:29:38.897993  992056 kubeadm.go:309] [bootstrap-token] Using token: wr0inu.l2vxagywmdawjzpm
	I0314 19:29:38.899387  992056 out.go:204]   - Configuring RBAC rules ...
	I0314 19:29:38.899518  992056 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:29:38.899597  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:29:38.899790  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:29:38.899950  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:29:38.900097  992056 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:29:38.900225  992056 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:29:38.900389  992056 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:29:38.900449  992056 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:29:38.900514  992056 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:29:38.900523  992056 kubeadm.go:309] 
	I0314 19:29:38.900615  992056 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:29:38.900638  992056 kubeadm.go:309] 
	I0314 19:29:38.900743  992056 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:29:38.900753  992056 kubeadm.go:309] 
	I0314 19:29:38.900788  992056 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:29:38.900872  992056 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:29:38.900945  992056 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:29:38.900954  992056 kubeadm.go:309] 
	I0314 19:29:38.901031  992056 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:29:38.901042  992056 kubeadm.go:309] 
	I0314 19:29:38.901111  992056 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:29:38.901124  992056 kubeadm.go:309] 
	I0314 19:29:38.901202  992056 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:29:38.901312  992056 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:29:38.901433  992056 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:29:38.901446  992056 kubeadm.go:309] 
	I0314 19:29:38.901523  992056 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:29:38.901614  992056 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:29:38.901624  992056 kubeadm.go:309] 
	I0314 19:29:38.901743  992056 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.901842  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:29:38.901862  992056 kubeadm.go:309] 	--control-plane 
	I0314 19:29:38.901865  992056 kubeadm.go:309] 
	I0314 19:29:38.901933  992056 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:29:38.901943  992056 kubeadm.go:309] 
	I0314 19:29:38.902025  992056 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8443 --token wr0inu.l2vxagywmdawjzpm \
	I0314 19:29:38.902185  992056 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
	I0314 19:29:38.902212  992056 cni.go:84] Creating CNI manager for ""
	I0314 19:29:38.902222  992056 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:29:38.903643  992056 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:29:36.944055  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:38.945642  992563 pod_ready.go:102] pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:39.437026  992563 pod_ready.go:81] duration metric: took 4m0.000967236s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" ...
	E0314 19:29:39.437057  992563 pod_ready.go:66] WaitExtra: waitPodCondition: timed out waiting 4m0s for pod "metrics-server-57f55c9bc5-t2hhv" in "kube-system" namespace to be "Ready" (will not retry!)
	I0314 19:29:39.437072  992563 pod_ready.go:38] duration metric: took 4m7.55729252s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:39.437098  992563 kubeadm.go:591] duration metric: took 4m15.521374831s to restartPrimaryControlPlane
	W0314 19:29:39.437168  992563 out.go:239] ! Unable to restart control-plane node(s), will reset cluster: <no value>
	I0314 19:29:39.437200  992563 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:29:38.904945  992056 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:29:38.921860  992056 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
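The two lines above are the bridge CNI setup that minikube recommends for the kvm2 driver with the crio runtime: a 457-byte conflist is copied to /etc/cni/net.d/1-k8s.conflist on the node. The log does not print the file contents; if needed they can be inspected from the host with an ordinary minikube ssh call (a sketch, not taken from the log):

    # Dump the generated CNI config from inside the VM for this profile
    minikube -p embed-certs-992669 ssh -- sudo cat /etc/cni/net.d/1-k8s.conflist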
	I0314 19:29:38.958963  992056 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:29:38.959064  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:38.959065  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes embed-certs-992669 minikube.k8s.io/updated_at=2024_03_14T19_29_38_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=embed-certs-992669 minikube.k8s.io/primary=true
	I0314 19:29:39.310627  992056 ops.go:34] apiserver oom_adj: -16
	I0314 19:29:39.310807  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:39.811730  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.311090  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.811674  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.311488  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:41.811640  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.310976  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:42.811336  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:40.283716  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:42.784841  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:43.311472  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:43.811668  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.311072  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:44.811108  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.311743  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.811197  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.311720  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:46.810955  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.311810  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:47.811633  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:45.282898  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:47.786855  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:48.310845  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:48.811747  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.310862  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:49.811100  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.311383  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:50.811660  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.311496  992056 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:29:51.565143  992056 kubeadm.go:1106] duration metric: took 12.606155275s to wait for elevateKubeSystemPrivileges
	W0314 19:29:51.565200  992056 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:29:51.565210  992056 kubeadm.go:393] duration metric: took 5m17.173193727s to StartCluster
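The burst of 'kubectl get sa default' calls above is minikube waiting for the freshly initialized cluster to create the 'default' service account before it finishes bootstrapping; the wait is reported as the elevateKubeSystemPrivileges duration (12.6s here). A manual equivalent of that wait, as a sketch of what the logged loop does (it retries roughly twice per second):

    # Poll until the default ServiceAccount exists in the new cluster
    until sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default \
          --kubeconfig=/var/lib/minikube/kubeconfig >/dev/null 2>&1; do
      sleep 0.5
    done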
	I0314 19:29:51.565243  992056 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.565344  992056 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:29:51.567430  992056 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:29:51.567800  992056 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.50.213 Port:8443 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:29:51.570366  992056 out.go:177] * Verifying Kubernetes components...
	I0314 19:29:51.567870  992056 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:29:51.568004  992056 config.go:182] Loaded profile config "embed-certs-992669": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:29:51.571834  992056 addons.go:69] Setting storage-provisioner=true in profile "embed-certs-992669"
	I0314 19:29:51.571847  992056 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:29:51.571872  992056 addons.go:69] Setting metrics-server=true in profile "embed-certs-992669"
	I0314 19:29:51.571922  992056 addons.go:234] Setting addon metrics-server=true in "embed-certs-992669"
	W0314 19:29:51.571942  992056 addons.go:243] addon metrics-server should already be in state true
	I0314 19:29:51.571981  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571884  992056 addons.go:234] Setting addon storage-provisioner=true in "embed-certs-992669"
	W0314 19:29:51.572025  992056 addons.go:243] addon storage-provisioner should already be in state true
	I0314 19:29:51.572056  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.571842  992056 addons.go:69] Setting default-storageclass=true in profile "embed-certs-992669"
	I0314 19:29:51.572143  992056 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "embed-certs-992669"
	I0314 19:29:51.572563  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572578  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572597  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572567  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.572611  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.572665  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.595116  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38789
	I0314 19:29:51.595142  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41605
	I0314 19:29:51.595156  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35283
	I0314 19:29:51.595736  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595837  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.595892  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.596363  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596382  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596516  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.596560  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596545  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.596788  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.596895  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597022  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.597213  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.597463  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597488  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.597536  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.597498  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.601587  992056 addons.go:234] Setting addon default-storageclass=true in "embed-certs-992669"
	W0314 19:29:51.601612  992056 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:29:51.601644  992056 host.go:66] Checking if "embed-certs-992669" exists ...
	I0314 19:29:51.602034  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.602069  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.613696  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45049
	I0314 19:29:51.614277  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.614924  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.614957  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.615340  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.615518  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.616192  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36391
	I0314 19:29:51.616643  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.617453  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.617661  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.617680  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.619738  992056 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:29:51.618228  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.621267  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:29:51.621284  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:29:51.621299  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.619984  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.622057  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36525
	I0314 19:29:51.622533  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.623169  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.623184  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.623511  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.623600  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.625179  992056 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:29:51.625183  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.627022  992056 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:51.627052  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:29:51.627074  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.624457  992056 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:29:51.627169  992056 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:29:51.625872  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.627276  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.626052  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.627505  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.628272  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.628593  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.630213  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630764  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.630788  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.630870  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.631065  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.631483  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.631681  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.645022  992056 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40099
	I0314 19:29:51.645562  992056 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:29:51.646147  992056 main.go:141] libmachine: Using API Version  1
	I0314 19:29:51.646172  992056 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:29:51.646551  992056 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:29:51.646766  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetState
	I0314 19:29:51.648424  992056 main.go:141] libmachine: (embed-certs-992669) Calling .DriverName
	I0314 19:29:51.648674  992056 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:51.648690  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:29:51.648702  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHHostname
	I0314 19:29:51.651513  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652188  992056 main.go:141] libmachine: (embed-certs-992669) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:05:e0:54", ip: ""} in network mk-embed-certs-992669: {Iface:virbr2 ExpiryTime:2024-03-14 20:24:18 +0000 UTC Type:0 Mac:52:54:00:05:e0:54 Iaid: IPaddr:192.168.50.213 Prefix:24 Hostname:embed-certs-992669 Clientid:01:52:54:00:05:e0:54}
	I0314 19:29:51.652197  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHPort
	I0314 19:29:51.652220  992056 main.go:141] libmachine: (embed-certs-992669) DBG | domain embed-certs-992669 has defined IP address 192.168.50.213 and MAC address 52:54:00:05:e0:54 in network mk-embed-certs-992669
	I0314 19:29:51.652395  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHKeyPath
	I0314 19:29:51.652552  992056 main.go:141] libmachine: (embed-certs-992669) Calling .GetSSHUsername
	I0314 19:29:51.652655  992056 sshutil.go:53] new ssh client: &{IP:192.168.50.213 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/embed-certs-992669/id_rsa Username:docker}
	I0314 19:29:51.845568  992056 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:29:51.865551  992056 node_ready.go:35] waiting up to 6m0s for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875093  992056 node_ready.go:49] node "embed-certs-992669" has status "Ready":"True"
	I0314 19:29:51.875111  992056 node_ready.go:38] duration metric: took 9.53464ms for node "embed-certs-992669" to be "Ready" ...
	I0314 19:29:51.875123  992056 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:51.883535  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:51.979907  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:29:52.034281  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:29:52.034312  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:29:52.060831  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:29:52.124847  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:29:52.124885  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:29:52.289209  992056 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:52.289239  992056 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:29:52.374833  992056 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:29:50.286539  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:52.298408  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:53.393013  992056 pod_ready.go:92] pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.393048  992056 pod_ready.go:81] duration metric: took 1.509482935s for pod "coredns-5dd5756b68-ngbmj" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.393060  992056 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401449  992056 pod_ready.go:92] pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.401476  992056 pod_ready.go:81] duration metric: took 8.407286ms for pod "coredns-5dd5756b68-tn7lt" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.401486  992056 pod_ready.go:78] waiting up to 6m0s for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406465  992056 pod_ready.go:92] pod "etcd-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.406492  992056 pod_ready.go:81] duration metric: took 4.997468ms for pod "etcd-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.406502  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412923  992056 pod_ready.go:92] pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.412954  992056 pod_ready.go:81] duration metric: took 6.441869ms for pod "kube-apiserver-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.412966  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469519  992056 pod_ready.go:92] pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.469552  992056 pod_ready.go:81] duration metric: took 56.57628ms for pod "kube-controller-manager-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.469566  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.582001  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (1.602041099s)
	I0314 19:29:53.582078  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582096  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582462  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.582484  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582500  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582521  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.582532  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.582795  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.582813  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.582853  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.590184  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.590202  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.590451  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.590487  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.886717  992056 pod_ready.go:92] pod "kube-proxy-hzhsp" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:53.886741  992056 pod_ready.go:81] duration metric: took 417.167569ms for pod "kube-proxy-hzhsp" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.886751  992056 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:53.965815  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (1.904943117s)
	I0314 19:29:53.965875  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.965887  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.966214  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.966240  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.966239  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.966249  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.966305  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.967958  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.968169  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.968187  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.996956  992056 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (1.622074464s)
	I0314 19:29:53.997019  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997033  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997356  992056 main.go:141] libmachine: (embed-certs-992669) DBG | Closing plugin on server side
	I0314 19:29:53.997378  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997400  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997415  992056 main.go:141] libmachine: Making call to close driver server
	I0314 19:29:53.997427  992056 main.go:141] libmachine: (embed-certs-992669) Calling .Close
	I0314 19:29:53.997740  992056 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:29:53.997758  992056 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:29:53.997771  992056 addons.go:470] Verifying addon metrics-server=true in "embed-certs-992669"
	I0314 19:29:53.999390  992056 out.go:177] * Enabled addons: default-storageclass, storage-provisioner, metrics-server
	I0314 19:29:54.000743  992056 addons.go:505] duration metric: took 2.432877042s for enable addons: enabled=[default-storageclass storage-provisioner metrics-server]
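With the addon manifests applied and verified above, the metrics-server and storage-provisioner pods are still Pending at this point (see the system_pods listing that follows). A quick way to watch them come up from the host with ordinary kubectl against this profile; the context name and label selector are assumptions, not shown in the log:

    kubectl --context embed-certs-992669 -n kube-system get pods -l k8s-app=metrics-server
    kubectl --context embed-certs-992669 -n kube-system get pod storage-provisioner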
	I0314 19:29:54.270407  992056 pod_ready.go:92] pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace has status "Ready":"True"
	I0314 19:29:54.270432  992056 pod_ready.go:81] duration metric: took 383.674695ms for pod "kube-scheduler-embed-certs-992669" in "kube-system" namespace to be "Ready" ...
	I0314 19:29:54.270440  992056 pod_ready.go:38] duration metric: took 2.395303637s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:29:54.270455  992056 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:29:54.270521  992056 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:29:54.293083  992056 api_server.go:72] duration metric: took 2.725234796s to wait for apiserver process to appear ...
	I0314 19:29:54.293113  992056 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:29:54.293164  992056 api_server.go:253] Checking apiserver healthz at https://192.168.50.213:8443/healthz ...
	I0314 19:29:54.302466  992056 api_server.go:279] https://192.168.50.213:8443/healthz returned 200:
	ok
	I0314 19:29:54.304317  992056 api_server.go:141] control plane version: v1.28.4
	I0314 19:29:54.304342  992056 api_server.go:131] duration metric: took 11.220873ms to wait for apiserver health ...
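The healthz probe above is a plain HTTPS GET against the apiserver. It can be reproduced from the host once the VM is reachable; the endpoint comes from the log, and -k simply skips certificate verification for the sketch:

    curl -k https://192.168.50.213:8443/healthz
    # -> ok, matching the 200 response logged above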
	I0314 19:29:54.304353  992056 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:29:54.479241  992056 system_pods.go:59] 9 kube-system pods found
	I0314 19:29:54.479276  992056 system_pods.go:61] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.479282  992056 system_pods.go:61] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.479288  992056 system_pods.go:61] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.479294  992056 system_pods.go:61] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.479299  992056 system_pods.go:61] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.479305  992056 system_pods.go:61] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.479310  992056 system_pods.go:61] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.479318  992056 system_pods.go:61] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.479325  992056 system_pods.go:61] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.479340  992056 system_pods.go:74] duration metric: took 174.978725ms to wait for pod list to return data ...
	I0314 19:29:54.479358  992056 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:29:54.668682  992056 default_sa.go:45] found service account: "default"
	I0314 19:29:54.668714  992056 default_sa.go:55] duration metric: took 189.346747ms for default service account to be created ...
	I0314 19:29:54.668727  992056 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:29:54.873128  992056 system_pods.go:86] 9 kube-system pods found
	I0314 19:29:54.873161  992056 system_pods.go:89] "coredns-5dd5756b68-ngbmj" [a85a72f9-bb81-4f35-97ec-585c80194c1c] Running
	I0314 19:29:54.873169  992056 system_pods.go:89] "coredns-5dd5756b68-tn7lt" [bf62479b-d5f9-4020-950d-8f3d71e952fa] Running
	I0314 19:29:54.873175  992056 system_pods.go:89] "etcd-embed-certs-992669" [c4a800ce-2d02-4b3e-862f-cd7aedf7754b] Running
	I0314 19:29:54.873184  992056 system_pods.go:89] "kube-apiserver-embed-certs-992669" [6c52de21-e530-464d-a445-24d563874202] Running
	I0314 19:29:54.873189  992056 system_pods.go:89] "kube-controller-manager-embed-certs-992669" [f97cadb3-a669-4236-914f-39f7a42c5814] Running
	I0314 19:29:54.873194  992056 system_pods.go:89] "kube-proxy-hzhsp" [cac20e54-9d37-4f3b-a71a-e92c03f806d8] Running
	I0314 19:29:54.873199  992056 system_pods.go:89] "kube-scheduler-embed-certs-992669" [d2b8a9c8-1a0d-413c-a019-ca8ba395853f] Running
	I0314 19:29:54.873211  992056 system_pods.go:89] "metrics-server-57f55c9bc5-kr2n6" [8ef90636-238c-4334-861a-e40c758d012b] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:29:54.873222  992056 system_pods.go:89] "storage-provisioner" [3f65c725-e834-45db-a417-fd47b421c883] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:29:54.873244  992056 system_pods.go:126] duration metric: took 204.509108ms to wait for k8s-apps to be running ...
	I0314 19:29:54.873256  992056 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:29:54.873311  992056 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:29:54.890288  992056 system_svc.go:56] duration metric: took 17.021036ms WaitForService to wait for kubelet
	I0314 19:29:54.890320  992056 kubeadm.go:576] duration metric: took 3.322477642s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:29:54.890347  992056 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:29:55.069429  992056 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:29:55.069458  992056 node_conditions.go:123] node cpu capacity is 2
	I0314 19:29:55.069506  992056 node_conditions.go:105] duration metric: took 179.148222ms to run NodePressure ...
	I0314 19:29:55.069521  992056 start.go:240] waiting for startup goroutines ...
	I0314 19:29:55.069529  992056 start.go:245] waiting for cluster config update ...
	I0314 19:29:55.069543  992056 start.go:254] writing updated cluster config ...
	I0314 19:29:55.069881  992056 ssh_runner.go:195] Run: rm -f paused
	I0314 19:29:55.129829  992056 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:29:55.131816  992056 out.go:177] * Done! kubectl is now configured to use "embed-certs-992669" cluster and "default" namespace by default
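After the 'Done!' line the embed-certs profile is fully up and kubectl has been pointed at it. A generic check from the host (not taken from the log):

    kubectl config current-context   # expected: embed-certs-992669
    kubectl get nodes                # the single node reported Ready earlier in the log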
	I0314 19:29:54.784171  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:57.281882  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:29:59.282486  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:01.282694  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:03.782088  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:05.785281  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:08.282878  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:10.782495  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:12.785319  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
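The pod_ready lines above show metrics-server-57f55c9bc5-rhg5r still NotReady after several minutes of polling by process 991880 (the no-preload-731976 profile, named later in this log). A hedged sketch of how one might inspect the stuck pod from outside the VM; the k8s-app=metrics-server selector is the label the minikube metrics-server addon normally carries and is an assumption here, not something this log prints:

    kubectl --context no-preload-731976 -n kube-system describe pod -l k8s-app=metrics-server
    kubectl --context no-preload-731976 -n kube-system get events --sort-by=.lastTimestamp | tail -n 20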
	I0314 19:30:11.911432  992563 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (32.474198952s)
	I0314 19:30:11.911536  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:11.930130  992563 ssh_runner.go:195] Run: sudo cp /var/tmp/minikube/kubeadm.yaml.new /var/tmp/minikube/kubeadm.yaml
	I0314 19:30:11.942380  992563 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:30:11.954695  992563 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:30:11.954724  992563 kubeadm.go:156] found existing configuration files:
	
	I0314 19:30:11.954795  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf
	I0314 19:30:11.966696  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:30:11.966772  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:30:11.980074  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf
	I0314 19:30:11.991635  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:30:11.991728  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:30:12.004984  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.016196  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:30:12.016271  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:30:12.027974  992563 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf
	I0314 19:30:12.039057  992563 kubeadm.go:162] "https://control-plane.minikube.internal:8444" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8444 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:30:12.039110  992563 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
	I0314 19:30:12.050231  992563 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.28.4:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:30:12.272978  992563 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:30:15.284464  991880 pod_ready.go:102] pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace has status "Ready":"False"
	I0314 19:30:16.784336  991880 pod_ready.go:81] duration metric: took 4m0.008931629s for pod "metrics-server-57f55c9bc5-rhg5r" in "kube-system" namespace to be "Ready" ...
	E0314 19:30:16.784369  991880 pod_ready.go:66] WaitExtra: waitPodCondition: context deadline exceeded
	I0314 19:30:16.784378  991880 pod_ready.go:38] duration metric: took 4m4.558023355s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:16.784398  991880 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:16.784436  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:16.784511  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:16.853550  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:16.853582  991880 cri.go:89] found id: ""
	I0314 19:30:16.853592  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:16.853657  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.858963  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:16.859036  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:16.920573  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:16.920607  991880 cri.go:89] found id: ""
	I0314 19:30:16.920618  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:16.920686  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.926133  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:16.926193  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:16.972150  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:16.972184  991880 cri.go:89] found id: ""
	I0314 19:30:16.972192  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:16.972276  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:16.979169  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:16.979247  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:17.028161  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:17.028191  991880 cri.go:89] found id: ""
	I0314 19:30:17.028202  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:17.028290  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.034573  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:17.034644  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:17.081031  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.081057  991880 cri.go:89] found id: ""
	I0314 19:30:17.081067  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:17.081132  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.086182  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:17.086254  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:17.124758  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.124793  991880 cri.go:89] found id: ""
	I0314 19:30:17.124804  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:17.124892  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.130576  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:17.130636  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:17.180055  991880 cri.go:89] found id: ""
	I0314 19:30:17.180088  991880 logs.go:276] 0 containers: []
	W0314 19:30:17.180100  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:17.180107  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:17.180174  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:17.227751  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.227785  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.227790  991880 cri.go:89] found id: ""
	I0314 19:30:17.227800  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:17.227859  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.232614  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:17.237357  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:17.237385  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:17.300841  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:17.300884  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:17.363775  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:17.363812  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:17.419276  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:17.419328  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:17.461722  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:17.461764  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:17.519105  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:17.519147  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:17.535065  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:17.535099  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:17.588603  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:17.588642  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:17.641770  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:17.641803  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:18.180497  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:18.180561  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:18.250700  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:18.250736  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:18.422627  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:18.422668  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:18.484021  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:18.484059  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.765051  992563 kubeadm.go:309] [init] Using Kubernetes version: v1.28.4
	I0314 19:30:21.765146  992563 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:30:21.765261  992563 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:30:21.765420  992563 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:30:21.765550  992563 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:30:21.765636  992563 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:30:21.767199  992563 out.go:204]   - Generating certificates and keys ...
	I0314 19:30:21.767291  992563 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:30:21.767371  992563 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:30:21.767473  992563 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:30:21.767548  992563 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:30:21.767636  992563 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:30:21.767703  992563 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:30:21.767787  992563 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:30:21.767864  992563 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:30:21.767957  992563 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:30:21.768051  992563 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:30:21.768098  992563 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:30:21.768170  992563 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:30:21.768260  992563 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:30:21.768327  992563 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:30:21.768407  992563 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:30:21.768487  992563 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:30:21.768592  992563 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:30:21.768685  992563 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:30:21.769989  992563 out.go:204]   - Booting up control plane ...
	I0314 19:30:21.770111  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:30:21.770213  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:30:21.770295  992563 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:30:21.770428  992563 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:30:21.770580  992563 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:30:21.770654  992563 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:30:21.770844  992563 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:30:21.770958  992563 kubeadm.go:309] [apiclient] All control plane components are healthy after 6.503303 seconds
	I0314 19:30:21.771087  992563 kubeadm.go:309] [upload-config] Storing the configuration used in ConfigMap "kubeadm-config" in the "kube-system" Namespace
	I0314 19:30:21.771238  992563 kubeadm.go:309] [kubelet] Creating a ConfigMap "kubelet-config" in namespace kube-system with the configuration for the kubelets in the cluster
	I0314 19:30:21.771320  992563 kubeadm.go:309] [upload-certs] Skipping phase. Please see --upload-certs
	I0314 19:30:21.771547  992563 kubeadm.go:309] [mark-control-plane] Marking the node default-k8s-diff-port-440341 as control-plane by adding the labels: [node-role.kubernetes.io/control-plane node.kubernetes.io/exclude-from-external-load-balancers]
	I0314 19:30:21.771634  992563 kubeadm.go:309] [bootstrap-token] Using token: tk83yg.fwvaicx2eo9i68ac
	I0314 19:30:21.773288  992563 out.go:204]   - Configuring RBAC rules ...
	I0314 19:30:21.773428  992563 kubeadm.go:309] [bootstrap-token] Configuring bootstrap tokens, cluster-info ConfigMap, RBAC Roles
	I0314 19:30:21.773532  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to get nodes
	I0314 19:30:21.773732  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow Node Bootstrap tokens to post CSRs in order for nodes to get long term certificate credentials
	I0314 19:30:21.773914  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow the csrapprover controller automatically approve CSRs from a Node Bootstrap Token
	I0314 19:30:21.774068  992563 kubeadm.go:309] [bootstrap-token] Configured RBAC rules to allow certificate rotation for all node client certificates in the cluster
	I0314 19:30:21.774180  992563 kubeadm.go:309] [bootstrap-token] Creating the "cluster-info" ConfigMap in the "kube-public" namespace
	I0314 19:30:21.774338  992563 kubeadm.go:309] [kubelet-finalize] Updating "/etc/kubernetes/kubelet.conf" to point to a rotatable kubelet client certificate and key
	I0314 19:30:21.774402  992563 kubeadm.go:309] [addons] Applied essential addon: CoreDNS
	I0314 19:30:21.774464  992563 kubeadm.go:309] [addons] Applied essential addon: kube-proxy
	I0314 19:30:21.774472  992563 kubeadm.go:309] 
	I0314 19:30:21.774577  992563 kubeadm.go:309] Your Kubernetes control-plane has initialized successfully!
	I0314 19:30:21.774601  992563 kubeadm.go:309] 
	I0314 19:30:21.774705  992563 kubeadm.go:309] To start using your cluster, you need to run the following as a regular user:
	I0314 19:30:21.774715  992563 kubeadm.go:309] 
	I0314 19:30:21.774744  992563 kubeadm.go:309]   mkdir -p $HOME/.kube
	I0314 19:30:21.774833  992563 kubeadm.go:309]   sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
	I0314 19:30:21.774914  992563 kubeadm.go:309]   sudo chown $(id -u):$(id -g) $HOME/.kube/config
	I0314 19:30:21.774930  992563 kubeadm.go:309] 
	I0314 19:30:21.775008  992563 kubeadm.go:309] Alternatively, if you are the root user, you can run:
	I0314 19:30:21.775033  992563 kubeadm.go:309] 
	I0314 19:30:21.775102  992563 kubeadm.go:309]   export KUBECONFIG=/etc/kubernetes/admin.conf
	I0314 19:30:21.775112  992563 kubeadm.go:309] 
	I0314 19:30:21.775191  992563 kubeadm.go:309] You should now deploy a pod network to the cluster.
	I0314 19:30:21.775311  992563 kubeadm.go:309] Run "kubectl apply -f [podnetwork].yaml" with one of the options listed at:
	I0314 19:30:21.775407  992563 kubeadm.go:309]   https://kubernetes.io/docs/concepts/cluster-administration/addons/
	I0314 19:30:21.775416  992563 kubeadm.go:309] 
	I0314 19:30:21.775537  992563 kubeadm.go:309] You can now join any number of control-plane nodes by copying certificate authorities
	I0314 19:30:21.775654  992563 kubeadm.go:309] and service account keys on each node and then running the following as root:
	I0314 19:30:21.775664  992563 kubeadm.go:309] 
	I0314 19:30:21.775774  992563 kubeadm.go:309]   kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.775940  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d \
	I0314 19:30:21.775971  992563 kubeadm.go:309] 	--control-plane 
	I0314 19:30:21.775977  992563 kubeadm.go:309] 
	I0314 19:30:21.776088  992563 kubeadm.go:309] Then you can join any number of worker nodes by running the following on each as root:
	I0314 19:30:21.776096  992563 kubeadm.go:309] 
	I0314 19:30:21.776235  992563 kubeadm.go:309] kubeadm join control-plane.minikube.internal:8444 --token tk83yg.fwvaicx2eo9i68ac \
	I0314 19:30:21.776419  992563 kubeadm.go:309] 	--discovery-token-ca-cert-hash sha256:9854976af6fbd58f68f86bf88684dc67b5f9ae2306d4aa5da587ba2a3778209d 
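For reference, the --discovery-token-ca-cert-hash value printed above can be recomputed on the control-plane node from the cluster CA. A sketch using standard openssl tooling and the certificateDir this run reports (/var/lib/minikube/certs); the exact path to ca.crt is assumed from that setting:

    openssl x509 -pubkey -in /var/lib/minikube/certs/ca.crt \
      | openssl rsa -pubin -outform der 2>/dev/null \
      | openssl dgst -sha256 -hex | sed 's/^.* //'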
	I0314 19:30:21.776441  992563 cni.go:84] Creating CNI manager for ""
	I0314 19:30:21.776451  992563 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 19:30:21.778042  992563 out.go:177] * Configuring bridge CNI (Container Networking Interface) ...
	I0314 19:30:21.037583  991880 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:21.055977  991880 api_server.go:72] duration metric: took 4m16.560286182s to wait for apiserver process to appear ...
	I0314 19:30:21.056002  991880 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:21.056039  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:21.056088  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:21.101556  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:21.101583  991880 cri.go:89] found id: ""
	I0314 19:30:21.101591  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:21.101640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.107192  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:21.107259  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:21.156580  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:21.156608  991880 cri.go:89] found id: ""
	I0314 19:30:21.156619  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:21.156681  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.162119  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:21.162277  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:21.204270  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:21.204295  991880 cri.go:89] found id: ""
	I0314 19:30:21.204304  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:21.204369  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.208987  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:21.209057  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:21.258998  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:21.259019  991880 cri.go:89] found id: ""
	I0314 19:30:21.259029  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:21.259094  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.264179  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:21.264264  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:21.314180  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.314213  991880 cri.go:89] found id: ""
	I0314 19:30:21.314225  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:21.314293  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.319693  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:21.319758  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:21.364936  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.364974  991880 cri.go:89] found id: ""
	I0314 19:30:21.364987  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:21.365061  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.370463  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:21.370531  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:21.411930  991880 cri.go:89] found id: ""
	I0314 19:30:21.411963  991880 logs.go:276] 0 containers: []
	W0314 19:30:21.411974  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:21.411982  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:21.412053  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:21.467849  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:21.467875  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.467881  991880 cri.go:89] found id: ""
	I0314 19:30:21.467891  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:21.467954  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.474463  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:21.480322  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:21.480351  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:21.532746  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:21.532778  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:21.599065  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:21.599115  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:21.655522  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:21.655563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:22.097480  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:22.097521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:22.154520  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:22.154563  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:22.175274  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:22.175312  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:22.302831  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:22.302865  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:22.353974  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:22.354017  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:22.392220  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:22.392263  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:22.433863  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:22.433893  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:22.474014  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:22.474047  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:22.522023  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:22.522056  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:21.779300  992563 ssh_runner.go:195] Run: sudo mkdir -p /etc/cni/net.d
	I0314 19:30:21.841175  992563 ssh_runner.go:362] scp memory --> /etc/cni/net.d/1-k8s.conflist (457 bytes)
	I0314 19:30:21.937053  992563 ssh_runner.go:195] Run: /bin/bash -c "cat /proc/$(pgrep kube-apiserver)/oom_adj"
	I0314 19:30:21.937114  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl create clusterrolebinding minikube-rbac --clusterrole=cluster-admin --serviceaccount=kube-system:default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:21.937131  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl --kubeconfig=/var/lib/minikube/kubeconfig label --overwrite nodes default-k8s-diff-port-440341 minikube.k8s.io/updated_at=2024_03_14T19_30_21_0700 minikube.k8s.io/version=v1.32.0 minikube.k8s.io/commit=c6f78a3db54ac629870afb44fb5bc8be9e04a8c7 minikube.k8s.io/name=default-k8s-diff-port-440341 minikube.k8s.io/primary=true
	I0314 19:30:22.169862  992563 ops.go:34] apiserver oom_adj: -16
	I0314 19:30:22.169890  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:22.670591  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.170361  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:23.670786  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.170313  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:24.670779  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.169961  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.670821  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:26.170263  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:25.081288  991880 api_server.go:253] Checking apiserver healthz at https://192.168.39.148:8443/healthz ...
	I0314 19:30:25.086127  991880 api_server.go:279] https://192.168.39.148:8443/healthz returned 200:
	ok
	I0314 19:30:25.087542  991880 api_server.go:141] control plane version: v1.29.0-rc.2
	I0314 19:30:25.087569  991880 api_server.go:131] duration metric: took 4.031556019s to wait for apiserver health ...
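The healthz probe above can be repeated by hand against the same endpoint; a minimal sketch, using -k because the cluster CA is typically not in the local trust store:

    curl -k https://192.168.39.148:8443/healthz
    # the log shows this returning HTTP 200 with body "ok"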
	I0314 19:30:25.087578  991880 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:25.087598  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:30:25.087646  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:30:25.136716  991880 cri.go:89] found id: "a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.136743  991880 cri.go:89] found id: ""
	I0314 19:30:25.136754  991880 logs.go:276] 1 containers: [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45]
	I0314 19:30:25.136818  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.142319  991880 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:30:25.142382  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:30:25.188007  991880 cri.go:89] found id: "db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.188031  991880 cri.go:89] found id: ""
	I0314 19:30:25.188040  991880 logs.go:276] 1 containers: [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89]
	I0314 19:30:25.188098  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.192982  991880 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:30:25.193056  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:30:25.235462  991880 cri.go:89] found id: "ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.235485  991880 cri.go:89] found id: ""
	I0314 19:30:25.235493  991880 logs.go:276] 1 containers: [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13]
	I0314 19:30:25.235543  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.239980  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:30:25.240048  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:30:25.288524  991880 cri.go:89] found id: "5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.288545  991880 cri.go:89] found id: ""
	I0314 19:30:25.288554  991880 logs.go:276] 1 containers: [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55]
	I0314 19:30:25.288604  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.294625  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:30:25.294680  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:30:25.332862  991880 cri.go:89] found id: "3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:25.332884  991880 cri.go:89] found id: ""
	I0314 19:30:25.332891  991880 logs.go:276] 1 containers: [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860]
	I0314 19:30:25.332949  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.337918  991880 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:30:25.337993  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:30:25.379537  991880 cri.go:89] found id: "9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:25.379569  991880 cri.go:89] found id: ""
	I0314 19:30:25.379578  991880 logs.go:276] 1 containers: [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0]
	I0314 19:30:25.379640  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.385396  991880 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:30:25.385471  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:30:25.426549  991880 cri.go:89] found id: ""
	I0314 19:30:25.426584  991880 logs.go:276] 0 containers: []
	W0314 19:30:25.426596  991880 logs.go:278] No container was found matching "kindnet"
	I0314 19:30:25.426603  991880 cri.go:54] listing CRI containers in root : {State:all Name:storage-provisioner Namespaces:[]}
	I0314 19:30:25.426676  991880 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=storage-provisioner
	I0314 19:30:25.468021  991880 cri.go:89] found id: "aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:25.468048  991880 cri.go:89] found id: "27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.468054  991880 cri.go:89] found id: ""
	I0314 19:30:25.468065  991880 logs.go:276] 2 containers: [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8]
	I0314 19:30:25.468134  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.473277  991880 ssh_runner.go:195] Run: which crictl
	I0314 19:30:25.477669  991880 logs.go:123] Gathering logs for kubelet ...
	I0314 19:30:25.477690  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:30:25.530477  991880 logs.go:123] Gathering logs for kube-apiserver [a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45] ...
	I0314 19:30:25.530521  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 a09531e613ae5ddcd86f1800cef31e6f95f77723875df8a3482f8581c73fed45"
	I0314 19:30:25.586949  991880 logs.go:123] Gathering logs for etcd [db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89] ...
	I0314 19:30:25.586985  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 db597de214816d6ceaf2f5974f1adcdecf7a77e12d5c9b63568baae8498f7b89"
	I0314 19:30:25.629933  991880 logs.go:123] Gathering logs for kube-scheduler [5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55] ...
	I0314 19:30:25.629972  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 5b8e529f9456244736a39cd9031bbe03f6a8c7b1edc30c47348f0d1ca9240c55"
	I0314 19:30:25.675919  991880 logs.go:123] Gathering logs for storage-provisioner [27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8] ...
	I0314 19:30:25.675955  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 27e79a384706cdbdbd94ade4a3352ffd489add7c06478415e774b7729a8fc2f8"
	I0314 19:30:25.724439  991880 logs.go:123] Gathering logs for container status ...
	I0314 19:30:25.724477  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	I0314 19:30:25.790827  991880 logs.go:123] Gathering logs for dmesg ...
	I0314 19:30:25.790864  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:30:25.808176  991880 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:30:25.808223  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.29.0-rc.2/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	I0314 19:30:25.925583  991880 logs.go:123] Gathering logs for coredns [ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13] ...
	I0314 19:30:25.925621  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 ec0841c5bdfb8a78aa13e4d8cf5b424d0f620249b1286acfd095900561ed0b13"
	I0314 19:30:25.972184  991880 logs.go:123] Gathering logs for kube-proxy [3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860] ...
	I0314 19:30:25.972237  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 3a8800127b84907c72c25730aad59dc4c42138b9e9f10f83c43a01241f584860"
	I0314 19:30:26.018051  991880 logs.go:123] Gathering logs for kube-controller-manager [9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0] ...
	I0314 19:30:26.018083  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 9151eb0c1b33c088cabcb31104f74111540994d8dbecb41cf9241756c2f4b8f0"
	I0314 19:30:26.080100  991880 logs.go:123] Gathering logs for storage-provisioner [aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1] ...
	I0314 19:30:26.080141  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo /usr/bin/crictl logs --tail 400 aeed99a1392eccbf58f6e73e0b7bea5ff1af34ac391c78314ff3cf09de8a9cc1"
	I0314 19:30:26.117235  991880 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:30:26.117276  991880 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:30:29.005596  991880 system_pods.go:59] 8 kube-system pods found
	I0314 19:30:29.005628  991880 system_pods.go:61] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.005632  991880 system_pods.go:61] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.005636  991880 system_pods.go:61] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.005639  991880 system_pods.go:61] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.005642  991880 system_pods.go:61] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.005645  991880 system_pods.go:61] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.005651  991880 system_pods.go:61] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.005657  991880 system_pods.go:61] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.005665  991880 system_pods.go:74] duration metric: took 3.918081505s to wait for pod list to return data ...
	I0314 19:30:29.005672  991880 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:29.008145  991880 default_sa.go:45] found service account: "default"
	I0314 19:30:29.008172  991880 default_sa.go:55] duration metric: took 2.493629ms for default service account to be created ...
	I0314 19:30:29.008181  991880 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:29.013603  991880 system_pods.go:86] 8 kube-system pods found
	I0314 19:30:29.013629  991880 system_pods.go:89] "coredns-76f75df574-mcddh" [d78c0561-04ac-4899-8a97-f3a04a1fa830] Running
	I0314 19:30:29.013641  991880 system_pods.go:89] "etcd-no-preload-731976" [c913a115-fb40-4878-b693-2d6985fee880] Running
	I0314 19:30:29.013646  991880 system_pods.go:89] "kube-apiserver-no-preload-731976" [e121201f-2c6c-48db-8b06-9e6fd4a20ee2] Running
	I0314 19:30:29.013650  991880 system_pods.go:89] "kube-controller-manager-no-preload-731976" [9a016e2a-e31d-46e2-bbcb-3f5f88001dc4] Running
	I0314 19:30:29.013654  991880 system_pods.go:89] "kube-proxy-fkn7b" [e7f519f9-13fd-4e04-ac0c-c9ad8ee67cf9] Running
	I0314 19:30:29.013658  991880 system_pods.go:89] "kube-scheduler-no-preload-731976" [faa0ed51-4e91-45c7-bb16-b71a1d9c60e6] Running
	I0314 19:30:29.013665  991880 system_pods.go:89] "metrics-server-57f55c9bc5-rhg5r" [5753b397-3b41-4fa7-8f7f-65db44a90b06] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:29.013673  991880 system_pods.go:89] "storage-provisioner" [3907dc47-cb82-4df6-8e40-a64bf166b313] Running
	I0314 19:30:29.013683  991880 system_pods.go:126] duration metric: took 5.49627ms to wait for k8s-apps to be running ...
	I0314 19:30:29.013692  991880 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:29.013744  991880 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:29.033211  991880 system_svc.go:56] duration metric: took 19.509127ms WaitForService to wait for kubelet
	I0314 19:30:29.033244  991880 kubeadm.go:576] duration metric: took 4m24.537554048s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:29.033262  991880 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:29.036387  991880 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:29.036409  991880 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:29.036419  991880 node_conditions.go:105] duration metric: took 3.152496ms to run NodePressure ...
	I0314 19:30:29.036432  991880 start.go:240] waiting for startup goroutines ...
	I0314 19:30:29.036441  991880 start.go:245] waiting for cluster config update ...
	I0314 19:30:29.036455  991880 start.go:254] writing updated cluster config ...
	I0314 19:30:29.036755  991880 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:29.086638  991880 start.go:600] kubectl: 1.29.2, cluster: 1.29.0-rc.2 (minor skew: 0)
	I0314 19:30:29.088767  991880 out.go:177] * Done! kubectl is now configured to use "no-preload-731976" cluster and "default" namespace by default
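Once a profile reports Done, its addon state can be listed directly; a hedged example, assuming a minikube binary on PATH and the profile name taken from this log:

    minikube -p no-preload-731976 addons list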
	I0314 19:30:26.670634  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.170774  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:27.670460  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.170571  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:28.670199  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.170324  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:29.670849  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.170021  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:30.670974  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.170929  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:31.670790  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.170127  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:32.670598  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.170188  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.670057  992563 ssh_runner.go:195] Run: sudo /var/lib/minikube/binaries/v1.28.4/kubectl get sa default --kubeconfig=/var/lib/minikube/kubeconfig
	I0314 19:30:33.800524  992563 kubeadm.go:1106] duration metric: took 11.863480183s to wait for elevateKubeSystemPrivileges
	W0314 19:30:33.800567  992563 kubeadm.go:286] apiserver tunnel failed: apiserver port not set
	I0314 19:30:33.800577  992563 kubeadm.go:393] duration metric: took 5m9.94050972s to StartCluster
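The repeated "kubectl get sa default" runs above are the elevateKubeSystemPrivileges wait: minikube appears to poll until the "default" ServiceAccount exists before declaring the cluster started. The equivalent one-off check from a workstation, assuming the default-k8s-diff-port-440341 context is configured locally:

    kubectl --context default-k8s-diff-port-440341 -n default get serviceaccount default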
	I0314 19:30:33.800600  992563 settings.go:142] acquiring lock: {Name:mk310edad572979c28bd0a2740b2f9d3080a14d5 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.800688  992563 settings.go:150] Updating kubeconfig:  /home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:30:33.802311  992563 lock.go:35] WriteFile acquiring /home/jenkins/minikube-integration/18384-942544/kubeconfig: {Name:mkf6d6e86f02afb516578c21cc2e309def90c070 Clock:{} Delay:500ms Timeout:1m0s Cancel:<nil>}
	I0314 19:30:33.802593  992563 start.go:234] Will wait 6m0s for node &{Name: IP:192.168.61.88 Port:8444 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}
	I0314 19:30:33.804369  992563 out.go:177] * Verifying Kubernetes components...
	I0314 19:30:33.802658  992563 addons.go:502] enable addons start: toEnable=map[ambassador:false auto-pause:false cloud-spanner:false csi-hostpath-driver:false dashboard:false default-storageclass:true efk:false freshpod:false gcp-auth:false gvisor:false headlamp:false helm-tiller:false inaccel:false ingress:false ingress-dns:false inspektor-gadget:false istio:false istio-provisioner:false kong:false kubeflow:false kubevirt:false logviewer:false metallb:false metrics-server:true nvidia-device-plugin:false nvidia-driver-installer:false nvidia-gpu-device-plugin:false olm:false pod-security-policy:false portainer:false registry:false registry-aliases:false registry-creds:false storage-provisioner:true storage-provisioner-gluster:false storage-provisioner-rancher:false volumesnapshots:false yakd:false]
	I0314 19:30:33.802827  992563 config.go:182] Loaded profile config "default-k8s-diff-port-440341": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:30:33.806017  992563 addons.go:69] Setting storage-provisioner=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806030  992563 ssh_runner.go:195] Run: sudo systemctl daemon-reload
	I0314 19:30:33.806039  992563 addons.go:69] Setting metrics-server=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806035  992563 addons.go:69] Setting default-storageclass=true in profile "default-k8s-diff-port-440341"
	I0314 19:30:33.806064  992563 addons.go:234] Setting addon storage-provisioner=true in "default-k8s-diff-port-440341"
	I0314 19:30:33.806070  992563 addons.go:234] Setting addon metrics-server=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.806078  992563 addons.go:243] addon storage-provisioner should already be in state true
	W0314 19:30:33.806079  992563 addons.go:243] addon metrics-server should already be in state true
	I0314 19:30:33.806077  992563 addons_storage_classes.go:33] enableOrDisableStorageClasses default-storageclass=true on "default-k8s-diff-port-440341"
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806114  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.806494  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806502  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806518  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806535  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.806588  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.806621  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.822764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35285
	I0314 19:30:33.823247  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.823804  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.823832  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.824297  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.824872  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.824921  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.826625  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40113
	I0314 19:30:33.826764  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39013
	I0314 19:30:33.827172  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827235  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.827776  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827802  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.827915  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.827936  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.828247  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.828442  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.829152  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.829934  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.829979  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.832093  992563 addons.go:234] Setting addon default-storageclass=true in "default-k8s-diff-port-440341"
	W0314 19:30:33.832117  992563 addons.go:243] addon default-storageclass should already be in state true
	I0314 19:30:33.832150  992563 host.go:66] Checking if "default-k8s-diff-port-440341" exists ...
	I0314 19:30:33.832523  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.832567  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.847051  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:45433
	I0314 19:30:33.847665  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.848345  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.848364  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.848731  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.848903  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.850545  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.852435  992563 out.go:177]   - Using image gcr.io/k8s-minikube/storage-provisioner:v5
	I0314 19:30:33.851181  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46639
	I0314 19:30:33.852606  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44445
	I0314 19:30:33.853975  992563 addons.go:426] installing /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:33.853991  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storage-provisioner.yaml (2676 bytes)
	I0314 19:30:33.854010  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.852808  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854344  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.854857  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.854878  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855189  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.855207  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.855286  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855613  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.855925  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.856281  992563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 19:30:33.856302  992563 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 19:30:33.857423  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.857734  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.859391  992563 out.go:177]   - Using image fake.domain/registry.k8s.io/echoserver:1.4
	I0314 19:30:33.858135  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.858383  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.860539  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-apiservice.yaml
	I0314 19:30:33.860554  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-apiservice.yaml (424 bytes)
	I0314 19:30:33.860561  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.860569  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.860651  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.860790  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.860935  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.862967  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863319  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.863339  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.863428  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.863627  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.863738  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.863908  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:33.880826  992563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33443
	I0314 19:30:33.881345  992563 main.go:141] libmachine: () Calling .GetVersion
	I0314 19:30:33.881799  992563 main.go:141] libmachine: Using API Version  1
	I0314 19:30:33.881818  992563 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 19:30:33.882187  992563 main.go:141] libmachine: () Calling .GetMachineName
	I0314 19:30:33.882341  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetState
	I0314 19:30:33.884263  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .DriverName
	I0314 19:30:33.884589  992563 addons.go:426] installing /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:33.884607  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/storageclass.yaml (271 bytes)
	I0314 19:30:33.884625  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHHostname
	I0314 19:30:33.887557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.887921  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:39:02:6d", ip: ""} in network mk-default-k8s-diff-port-440341: {Iface:virbr3 ExpiryTime:2024-03-14 20:17:00 +0000 UTC Type:0 Mac:52:54:00:39:02:6d Iaid: IPaddr:192.168.61.88 Prefix:24 Hostname:default-k8s-diff-port-440341 Clientid:01:52:54:00:39:02:6d}
	I0314 19:30:33.887945  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | domain default-k8s-diff-port-440341 has defined IP address 192.168.61.88 and MAC address 52:54:00:39:02:6d in network mk-default-k8s-diff-port-440341
	I0314 19:30:33.888190  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHPort
	I0314 19:30:33.888503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHKeyPath
	I0314 19:30:33.888670  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .GetSSHUsername
	I0314 19:30:33.888773  992563 sshutil.go:53] new ssh client: &{IP:192.168.61.88 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/default-k8s-diff-port-440341/id_rsa Username:docker}
	I0314 19:30:34.034473  992563 ssh_runner.go:195] Run: sudo systemctl start kubelet
	I0314 19:30:34.090558  992563 node_ready.go:35] waiting up to 6m0s for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129103  992563 node_ready.go:49] node "default-k8s-diff-port-440341" has status "Ready":"True"
	I0314 19:30:34.129135  992563 node_ready.go:38] duration metric: took 38.535795ms for node "default-k8s-diff-port-440341" to be "Ready" ...
	I0314 19:30:34.129148  992563 pod_ready.go:35] extra waiting up to 6m0s for all system-critical pods including labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:34.137612  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:34.186085  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-deployment.yaml
	I0314 19:30:34.186105  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-deployment.yaml (1825 bytes)
	I0314 19:30:34.218932  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml
	I0314 19:30:34.220858  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-rbac.yaml
	I0314 19:30:34.220881  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-rbac.yaml (2175 bytes)
	I0314 19:30:34.235535  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml
	I0314 19:30:34.356161  992563 addons.go:426] installing /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:34.356196  992563 ssh_runner.go:362] scp memory --> /etc/kubernetes/addons/metrics-server-service.yaml (446 bytes)
	I0314 19:30:34.486555  992563 ssh_runner.go:195] Run: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml
	I0314 19:30:36.162952  992563 pod_ready.go:92] pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.162991  992563 pod_ready.go:81] duration metric: took 2.025345367s for pod "coredns-5dd5756b68-g4dzq" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.163005  992563 pod_ready.go:78] waiting up to 6m0s for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171143  992563 pod_ready.go:92] pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.171227  992563 pod_ready.go:81] duration metric: took 8.211826ms for pod "coredns-5dd5756b68-qkhfs" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.171254  992563 pod_ready.go:78] waiting up to 6m0s for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182222  992563 pod_ready.go:92] pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.182246  992563 pod_ready.go:81] duration metric: took 10.963779ms for pod "etcd-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.182255  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196349  992563 pod_ready.go:92] pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.196375  992563 pod_ready.go:81] duration metric: took 14.113911ms for pod "kube-apiserver-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.196385  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201427  992563 pod_ready.go:92] pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.201448  992563 pod_ready.go:81] duration metric: took 5.056279ms for pod "kube-controller-manager-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.201456  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.470967  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storageclass.yaml: (2.235390903s)
	I0314 19:30:36.471092  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471113  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471179  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/storage-provisioner.yaml: (2.252178888s)
	I0314 19:30:36.471229  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471250  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471503  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471529  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471565  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471565  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.471576  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471583  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471605  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.471626  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.471639  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471589  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.471854  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.471876  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.472161  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.472167  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.472186  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.491529  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.491557  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.491867  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.491887  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.546393  992563 pod_ready.go:92] pod "kube-proxy-h7hdc" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.546418  992563 pod_ready.go:81] duration metric: took 344.955471ms for pod "kube-proxy-h7hdc" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.546427  992563 pod_ready.go:78] waiting up to 6m0s for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.619091  992563 ssh_runner.go:235] Completed: sudo KUBECONFIG=/var/lib/minikube/kubeconfig /var/lib/minikube/binaries/v1.28.4/kubectl apply -f /etc/kubernetes/addons/metrics-apiservice.yaml -f /etc/kubernetes/addons/metrics-server-deployment.yaml -f /etc/kubernetes/addons/metrics-server-rbac.yaml -f /etc/kubernetes/addons/metrics-server-service.yaml: (2.132488028s)
	I0314 19:30:36.619147  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619165  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619443  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619459  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619469  992563 main.go:141] libmachine: Making call to close driver server
	I0314 19:30:36.619477  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) Calling .Close
	I0314 19:30:36.619809  992563 main.go:141] libmachine: Successfully made call to close driver server
	I0314 19:30:36.619839  992563 main.go:141] libmachine: Making call to close connection to plugin binary
	I0314 19:30:36.619847  992563 main.go:141] libmachine: (default-k8s-diff-port-440341) DBG | Closing plugin on server side
	I0314 19:30:36.619851  992563 addons.go:470] Verifying addon metrics-server=true in "default-k8s-diff-port-440341"
	I0314 19:30:36.621595  992563 out.go:177] * Enabled addons: storage-provisioner, default-storageclass, metrics-server
	I0314 19:30:36.622935  992563 addons.go:505] duration metric: took 2.820276683s for enable addons: enabled=[storage-provisioner default-storageclass metrics-server]
	I0314 19:30:36.950079  992563 pod_ready.go:92] pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace has status "Ready":"True"
	I0314 19:30:36.950112  992563 pod_ready.go:81] duration metric: took 403.67651ms for pod "kube-scheduler-default-k8s-diff-port-440341" in "kube-system" namespace to be "Ready" ...
	I0314 19:30:36.950124  992563 pod_ready.go:38] duration metric: took 2.820962547s for extra waiting for all system-critical and pods with labels [k8s-app=kube-dns component=etcd component=kube-apiserver component=kube-controller-manager k8s-app=kube-proxy component=kube-scheduler] to be "Ready" ...
	I0314 19:30:36.950145  992563 api_server.go:52] waiting for apiserver process to appear ...
	I0314 19:30:36.950212  992563 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 19:30:37.024696  992563 api_server.go:72] duration metric: took 3.222061457s to wait for apiserver process to appear ...
	I0314 19:30:37.024728  992563 api_server.go:88] waiting for apiserver healthz status ...
	I0314 19:30:37.024754  992563 api_server.go:253] Checking apiserver healthz at https://192.168.61.88:8444/healthz ...
	I0314 19:30:37.031369  992563 api_server.go:279] https://192.168.61.88:8444/healthz returned 200:
	ok
	I0314 19:30:37.033114  992563 api_server.go:141] control plane version: v1.28.4
	I0314 19:30:37.033137  992563 api_server.go:131] duration metric: took 8.40225ms to wait for apiserver health ...
	I0314 19:30:37.033145  992563 system_pods.go:43] waiting for kube-system pods to appear ...
	I0314 19:30:37.157219  992563 system_pods.go:59] 9 kube-system pods found
	I0314 19:30:37.157256  992563 system_pods.go:61] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.157263  992563 system_pods.go:61] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.157269  992563 system_pods.go:61] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.157276  992563 system_pods.go:61] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.157282  992563 system_pods.go:61] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.157286  992563 system_pods.go:61] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.157291  992563 system_pods.go:61] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.157300  992563 system_pods.go:61] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.157308  992563 system_pods.go:61] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.157323  992563 system_pods.go:74] duration metric: took 124.170301ms to wait for pod list to return data ...
	I0314 19:30:37.157336  992563 default_sa.go:34] waiting for default service account to be created ...
	I0314 19:30:37.343573  992563 default_sa.go:45] found service account: "default"
	I0314 19:30:37.343602  992563 default_sa.go:55] duration metric: took 186.253477ms for default service account to be created ...
	I0314 19:30:37.343620  992563 system_pods.go:116] waiting for k8s-apps to be running ...
	I0314 19:30:37.549907  992563 system_pods.go:86] 9 kube-system pods found
	I0314 19:30:37.549947  992563 system_pods.go:89] "coredns-5dd5756b68-g4dzq" [9e849b06-74f4-4d8e-95b1-16136db8faee] Running
	I0314 19:30:37.549955  992563 system_pods.go:89] "coredns-5dd5756b68-qkhfs" [ac0f6749-fd4a-41ea-9b02-5ce5ea58e3a8] Running
	I0314 19:30:37.549962  992563 system_pods.go:89] "etcd-default-k8s-diff-port-440341" [f0b3dc38-e2c6-4703-a300-97e57d03a7ed] Running
	I0314 19:30:37.549969  992563 system_pods.go:89] "kube-apiserver-default-k8s-diff-port-440341" [63c343c3-df13-47b8-9388-875e98f65bb4] Running
	I0314 19:30:37.549977  992563 system_pods.go:89] "kube-controller-manager-default-k8s-diff-port-440341" [8ae81fad-f61d-47da-bcdb-77b19ba7b265] Running
	I0314 19:30:37.549982  992563 system_pods.go:89] "kube-proxy-h7hdc" [e2e6b4f3-8ba9-4f0a-8e04-b289699b1017] Running
	I0314 19:30:37.549987  992563 system_pods.go:89] "kube-scheduler-default-k8s-diff-port-440341" [11f55fd7-f716-4f6b-86cd-41f0101230da] Running
	I0314 19:30:37.549998  992563 system_pods.go:89] "metrics-server-57f55c9bc5-p7s4d" [1b13ae7e-62a0-429c-bf4f-0f38b222db7e] Pending / Ready:ContainersNotReady (containers with unready status: [metrics-server]) / ContainersReady:ContainersNotReady (containers with unready status: [metrics-server])
	I0314 19:30:37.550007  992563 system_pods.go:89] "storage-provisioner" [daafd1bc-b1f1-4dab-b615-8364e22f984f] Pending / Ready:ContainersNotReady (containers with unready status: [storage-provisioner]) / ContainersReady:ContainersNotReady (containers with unready status: [storage-provisioner])
	I0314 19:30:37.550022  992563 system_pods.go:126] duration metric: took 206.393584ms to wait for k8s-apps to be running ...
	I0314 19:30:37.550039  992563 system_svc.go:44] waiting for kubelet service to be running ....
	I0314 19:30:37.550098  992563 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:30:37.568337  992563 system_svc.go:56] duration metric: took 18.290339ms WaitForService to wait for kubelet
	I0314 19:30:37.568369  992563 kubeadm.go:576] duration metric: took 3.765742034s to wait for: map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true]
	I0314 19:30:37.568396  992563 node_conditions.go:102] verifying NodePressure condition ...
	I0314 19:30:37.747892  992563 node_conditions.go:122] node storage ephemeral capacity is 17734596Ki
	I0314 19:30:37.747931  992563 node_conditions.go:123] node cpu capacity is 2
	I0314 19:30:37.747945  992563 node_conditions.go:105] duration metric: took 179.543321ms to run NodePressure ...
	I0314 19:30:37.747959  992563 start.go:240] waiting for startup goroutines ...
	I0314 19:30:37.747969  992563 start.go:245] waiting for cluster config update ...
	I0314 19:30:37.747984  992563 start.go:254] writing updated cluster config ...
	I0314 19:30:37.748310  992563 ssh_runner.go:195] Run: rm -f paused
	I0314 19:30:37.800491  992563 start.go:600] kubectl: 1.29.2, cluster: 1.28.4 (minor skew: 1)
	I0314 19:30:37.802410  992563 out.go:177] * Done! kubectl is now configured to use "default-k8s-diff-port-440341" cluster and "default" namespace by default
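The lines above show the default-k8s-diff-port-440341 profile coming up cleanly: the node reports Ready, the system-critical pods and addon manifests are applied, and the apiserver healthz endpoint on port 8444 answers ok. A minimal sketch of how those same checks can be repeated by hand against this cluster (the context name, IP and port are taken from this run and are not general):

	# node readiness, as node_ready.go waits for
	kubectl --context default-k8s-diff-port-440341 wait --for=condition=Ready node/default-k8s-diff-port-440341 --timeout=6m
	# system-critical pods, as pod_ready.go / system_pods.go verify
	kubectl --context default-k8s-diff-port-440341 get pods -n kube-system
	# apiserver health, as api_server.go probes (self-signed cert, hence -k)
	curl -k https://192.168.61.88:8444/healthz

The remaining lines in this section come from a second process (992344) whose kubeadm init against Kubernetes v1.20.0 does not complete.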
	I0314 19:31:02.414037  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:31:02.414153  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:31:02.415801  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:02.415891  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:02.415997  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:02.416110  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:02.416236  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:02.416324  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:02.418205  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:02.418304  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:02.418377  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:02.418455  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:02.418519  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:02.418629  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:02.418704  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:02.418793  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:02.418892  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:02.419018  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:02.419129  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:02.419184  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:02.419270  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:02.419347  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:02.419421  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:02.419528  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:02.419624  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:02.419808  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:02.419914  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:02.419951  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:02.420007  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:02.421520  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:02.421603  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:02.421669  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:02.421753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:02.421844  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:02.422023  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:02.422092  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:02.422167  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422353  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422458  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422731  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.422812  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.422970  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423032  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423228  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423333  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:02.423479  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:02.423488  992344 kubeadm.go:309] 
	I0314 19:31:02.423519  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:31:02.423552  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:31:02.423558  992344 kubeadm.go:309] 
	I0314 19:31:02.423601  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:31:02.423643  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:31:02.423770  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:31:02.423780  992344 kubeadm.go:309] 
	I0314 19:31:02.423912  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:31:02.423949  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:31:02.424001  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:31:02.424012  992344 kubeadm.go:309] 
	I0314 19:31:02.424141  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:31:02.424269  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:31:02.424280  992344 kubeadm.go:309] 
	I0314 19:31:02.424405  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:31:02.424481  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:31:02.424542  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:31:02.424606  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:31:02.424638  992344 kubeadm.go:309] 
	W0314 19:31:02.424800  992344 out.go:239] ! initialization failed, will try again: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
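The init above times out in wait-control-plane because the kubelet never answers on its health port. A short sketch that collects the checks kubeadm's own output recommends, run on the affected node (the crictl socket path matches the CRI-O setup used in this run):

	# is the kubelet running, and why did it exit?
	sudo systemctl status kubelet
	sudo journalctl -xeu kubelet | tail -n 100
	# the endpoint kubeadm polls during wait-control-plane
	curl -sSL http://localhost:10248/healthz
	# any control-plane containers CRI-O managed to start
	sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause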
	I0314 19:31:02.424887  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force"
	I0314 19:31:03.827325  992344 ssh_runner.go:235] Completed: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm reset --cri-socket /var/run/crio/crio.sock --force": (1.402406647s)
	I0314 19:31:03.827421  992344 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 19:31:03.845125  992344 ssh_runner.go:195] Run: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf
	I0314 19:31:03.856796  992344 kubeadm.go:154] config check failed, skipping stale config cleanup: sudo ls -la /etc/kubernetes/admin.conf /etc/kubernetes/kubelet.conf /etc/kubernetes/controller-manager.conf /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	ls: cannot access '/etc/kubernetes/admin.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/kubelet.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/controller-manager.conf': No such file or directory
	ls: cannot access '/etc/kubernetes/scheduler.conf': No such file or directory
	I0314 19:31:03.856821  992344 kubeadm.go:156] found existing configuration files:
	
	I0314 19:31:03.856875  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf
	I0314 19:31:03.868304  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/admin.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/admin.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/admin.conf: No such file or directory
	I0314 19:31:03.868359  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/admin.conf
	I0314 19:31:03.879608  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf
	I0314 19:31:03.891002  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/kubelet.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/kubelet.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/kubelet.conf: No such file or directory
	I0314 19:31:03.891068  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/kubelet.conf
	I0314 19:31:03.902543  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.913159  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/controller-manager.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/controller-manager.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/controller-manager.conf: No such file or directory
	I0314 19:31:03.913212  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/controller-manager.conf
	I0314 19:31:03.926194  992344 ssh_runner.go:195] Run: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf
	I0314 19:31:03.937276  992344 kubeadm.go:162] "https://control-plane.minikube.internal:8443" may not be in /etc/kubernetes/scheduler.conf - will remove: sudo grep https://control-plane.minikube.internal:8443 /etc/kubernetes/scheduler.conf: Process exited with status 2
	stdout:
	
	stderr:
	grep: /etc/kubernetes/scheduler.conf: No such file or directory
	I0314 19:31:03.937344  992344 ssh_runner.go:195] Run: sudo rm -f /etc/kubernetes/scheduler.conf
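The stale-config check above runs one grep/rm pair per kubeconfig file. The same cleanup can be written as a single loop; this is a sketch of the equivalent logic, assuming the same control-plane endpoint and file set as this run:

	for f in admin.conf kubelet.conf controller-manager.conf scheduler.conf; do
	  # keep the file only if it already points at the expected control-plane endpoint
	  if ! sudo grep -q "https://control-plane.minikube.internal:8443" "/etc/kubernetes/$f"; then
	    sudo rm -f "/etc/kubernetes/$f"
	  fi
	done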
	I0314 19:31:03.949719  992344 ssh_runner.go:286] Start: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem"
	I0314 19:31:04.026772  992344 kubeadm.go:309] [init] Using Kubernetes version: v1.20.0
	I0314 19:31:04.026841  992344 kubeadm.go:309] [preflight] Running pre-flight checks
	I0314 19:31:04.195658  992344 kubeadm.go:309] [preflight] Pulling images required for setting up a Kubernetes cluster
	I0314 19:31:04.195816  992344 kubeadm.go:309] [preflight] This might take a minute or two, depending on the speed of your internet connection
	I0314 19:31:04.195973  992344 kubeadm.go:309] [preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	I0314 19:31:04.416776  992344 kubeadm.go:309] [certs] Using certificateDir folder "/var/lib/minikube/certs"
	I0314 19:31:04.418845  992344 out.go:204]   - Generating certificates and keys ...
	I0314 19:31:04.418937  992344 kubeadm.go:309] [certs] Using existing ca certificate authority
	I0314 19:31:04.419023  992344 kubeadm.go:309] [certs] Using existing apiserver certificate and key on disk
	I0314 19:31:04.419125  992344 kubeadm.go:309] [certs] Using existing apiserver-kubelet-client certificate and key on disk
	I0314 19:31:04.419222  992344 kubeadm.go:309] [certs] Using existing front-proxy-ca certificate authority
	I0314 19:31:04.419321  992344 kubeadm.go:309] [certs] Using existing front-proxy-client certificate and key on disk
	I0314 19:31:04.419386  992344 kubeadm.go:309] [certs] Using existing etcd/ca certificate authority
	I0314 19:31:04.419869  992344 kubeadm.go:309] [certs] Using existing etcd/server certificate and key on disk
	I0314 19:31:04.420376  992344 kubeadm.go:309] [certs] Using existing etcd/peer certificate and key on disk
	I0314 19:31:04.420786  992344 kubeadm.go:309] [certs] Using existing etcd/healthcheck-client certificate and key on disk
	I0314 19:31:04.421265  992344 kubeadm.go:309] [certs] Using existing apiserver-etcd-client certificate and key on disk
	I0314 19:31:04.421447  992344 kubeadm.go:309] [certs] Using the existing "sa" key
	I0314 19:31:04.421551  992344 kubeadm.go:309] [kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	I0314 19:31:04.472916  992344 kubeadm.go:309] [kubeconfig] Writing "admin.conf" kubeconfig file
	I0314 19:31:04.572160  992344 kubeadm.go:309] [kubeconfig] Writing "kubelet.conf" kubeconfig file
	I0314 19:31:04.802131  992344 kubeadm.go:309] [kubeconfig] Writing "controller-manager.conf" kubeconfig file
	I0314 19:31:04.892115  992344 kubeadm.go:309] [kubeconfig] Writing "scheduler.conf" kubeconfig file
	I0314 19:31:04.908810  992344 kubeadm.go:309] [kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	I0314 19:31:04.910191  992344 kubeadm.go:309] [kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	I0314 19:31:04.910266  992344 kubeadm.go:309] [kubelet-start] Starting the kubelet
	I0314 19:31:05.076124  992344 kubeadm.go:309] [control-plane] Using manifest folder "/etc/kubernetes/manifests"
	I0314 19:31:05.078423  992344 out.go:204]   - Booting up control plane ...
	I0314 19:31:05.078564  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-apiserver"
	I0314 19:31:05.083626  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-controller-manager"
	I0314 19:31:05.083753  992344 kubeadm.go:309] [control-plane] Creating static Pod manifest for "kube-scheduler"
	I0314 19:31:05.084096  992344 kubeadm.go:309] [etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	I0314 19:31:05.088164  992344 kubeadm.go:309] [wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	I0314 19:31:45.090977  992344 kubeadm.go:309] [kubelet-check] Initial timeout of 40s passed.
	I0314 19:31:45.091099  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:45.091378  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:31:50.091571  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:31:50.091787  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:00.093031  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:00.093312  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:32:20.094443  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:32:20.094650  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096632  992344 kubeadm.go:309] [kubelet-check] It seems like the kubelet isn't running or healthy.
	I0314 19:33:00.096929  992344 kubeadm.go:309] [kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	I0314 19:33:00.096948  992344 kubeadm.go:309] 
	I0314 19:33:00.096986  992344 kubeadm.go:309] 	Unfortunately, an error has occurred:
	I0314 19:33:00.097021  992344 kubeadm.go:309] 		timed out waiting for the condition
	I0314 19:33:00.097030  992344 kubeadm.go:309] 
	I0314 19:33:00.097059  992344 kubeadm.go:309] 	This error is likely caused by:
	I0314 19:33:00.097088  992344 kubeadm.go:309] 		- The kubelet is not running
	I0314 19:33:00.097203  992344 kubeadm.go:309] 		- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	I0314 19:33:00.097228  992344 kubeadm.go:309] 
	I0314 19:33:00.097345  992344 kubeadm.go:309] 	If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
	I0314 19:33:00.097394  992344 kubeadm.go:309] 		- 'systemctl status kubelet'
	I0314 19:33:00.097451  992344 kubeadm.go:309] 		- 'journalctl -xeu kubelet'
	I0314 19:33:00.097461  992344 kubeadm.go:309] 
	I0314 19:33:00.097572  992344 kubeadm.go:309] 	Additionally, a control plane component may have crashed or exited when started by the container runtime.
	I0314 19:33:00.097673  992344 kubeadm.go:309] 	To troubleshoot, list all containers using your preferred container runtimes CLI.
	I0314 19:33:00.097685  992344 kubeadm.go:309] 
	I0314 19:33:00.097865  992344 kubeadm.go:309] 	Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
	I0314 19:33:00.098003  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
	I0314 19:33:00.098105  992344 kubeadm.go:309] 		Once you have found the failing container, you can inspect its logs with:
	I0314 19:33:00.098202  992344 kubeadm.go:309] 		- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	I0314 19:33:00.098221  992344 kubeadm.go:309] 
	I0314 19:33:00.098939  992344 kubeadm.go:309] 	[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	I0314 19:33:00.099069  992344 kubeadm.go:309] error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	I0314 19:33:00.099160  992344 kubeadm.go:309] To see the stack trace of this error execute with --v=5 or higher
	I0314 19:33:00.099254  992344 kubeadm.go:393] duration metric: took 7m59.845612375s to StartCluster
	I0314 19:33:00.099339  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-apiserver Namespaces:[]}
	I0314 19:33:00.099422  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-apiserver
	I0314 19:33:00.151833  992344 cri.go:89] found id: ""
	I0314 19:33:00.151861  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.151869  992344 logs.go:278] No container was found matching "kube-apiserver"
	I0314 19:33:00.151876  992344 cri.go:54] listing CRI containers in root : {State:all Name:etcd Namespaces:[]}
	I0314 19:33:00.151943  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=etcd
	I0314 19:33:00.196473  992344 cri.go:89] found id: ""
	I0314 19:33:00.196508  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.196519  992344 logs.go:278] No container was found matching "etcd"
	I0314 19:33:00.196526  992344 cri.go:54] listing CRI containers in root : {State:all Name:coredns Namespaces:[]}
	I0314 19:33:00.196595  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=coredns
	I0314 19:33:00.233150  992344 cri.go:89] found id: ""
	I0314 19:33:00.233193  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.233207  992344 logs.go:278] No container was found matching "coredns"
	I0314 19:33:00.233217  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-scheduler Namespaces:[]}
	I0314 19:33:00.233292  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-scheduler
	I0314 19:33:00.273142  992344 cri.go:89] found id: ""
	I0314 19:33:00.273183  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.273196  992344 logs.go:278] No container was found matching "kube-scheduler"
	I0314 19:33:00.273205  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-proxy Namespaces:[]}
	I0314 19:33:00.273274  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-proxy
	I0314 19:33:00.311472  992344 cri.go:89] found id: ""
	I0314 19:33:00.311510  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.311523  992344 logs.go:278] No container was found matching "kube-proxy"
	I0314 19:33:00.311544  992344 cri.go:54] listing CRI containers in root : {State:all Name:kube-controller-manager Namespaces:[]}
	I0314 19:33:00.311618  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kube-controller-manager
	I0314 19:33:00.352110  992344 cri.go:89] found id: ""
	I0314 19:33:00.352138  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.352146  992344 logs.go:278] No container was found matching "kube-controller-manager"
	I0314 19:33:00.352152  992344 cri.go:54] listing CRI containers in root : {State:all Name:kindnet Namespaces:[]}
	I0314 19:33:00.352230  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kindnet
	I0314 19:33:00.399016  992344 cri.go:89] found id: ""
	I0314 19:33:00.399050  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.399060  992344 logs.go:278] No container was found matching "kindnet"
	I0314 19:33:00.399068  992344 cri.go:54] listing CRI containers in root : {State:all Name:kubernetes-dashboard Namespaces:[]}
	I0314 19:33:00.399140  992344 ssh_runner.go:195] Run: sudo crictl ps -a --quiet --name=kubernetes-dashboard
	I0314 19:33:00.436808  992344 cri.go:89] found id: ""
	I0314 19:33:00.436844  992344 logs.go:276] 0 containers: []
	W0314 19:33:00.436857  992344 logs.go:278] No container was found matching "kubernetes-dashboard"
	I0314 19:33:00.436871  992344 logs.go:123] Gathering logs for kubelet ...
	I0314 19:33:00.436889  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u kubelet -n 400"
	I0314 19:33:00.487696  992344 logs.go:123] Gathering logs for dmesg ...
	I0314 19:33:00.487732  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo dmesg -PH -L=never --level warn,err,crit,alert,emerg | tail -n 400"
	I0314 19:33:00.503591  992344 logs.go:123] Gathering logs for describe nodes ...
	I0314 19:33:00.503624  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig"
	W0314 19:33:00.586980  992344 logs.go:130] failed describe nodes: command: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	 output: 
	** stderr ** 
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	** /stderr **
	I0314 19:33:00.587014  992344 logs.go:123] Gathering logs for CRI-O ...
	I0314 19:33:00.587033  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo journalctl -u crio -n 400"
	I0314 19:33:00.697747  992344 logs.go:123] Gathering logs for container status ...
	I0314 19:33:00.697805  992344 ssh_runner.go:195] Run: /bin/bash -c "sudo `which crictl || echo crictl` ps -a || sudo docker ps -a"
	W0314 19:33:00.767728  992344 out.go:364] Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	W0314 19:33:00.767799  992344 out.go:239] * 
	W0314 19:33:00.768013  992344 out.go:239] X Error starting cluster: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.768052  992344 out.go:239] * 
	W0314 19:33:00.769333  992344 out.go:239] ╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯
	I0314 19:33:00.772897  992344 out.go:177] 
	W0314 19:33:00.774102  992344 out.go:239] X Exiting due to K8S_KUBELET_NOT_RUNNING: wait: /bin/bash -c "sudo env PATH="/var/lib/minikube/binaries/v1.20.0:$PATH" kubeadm init --config /var/tmp/minikube/kubeadm.yaml  --ignore-preflight-errors=DirAvailable--etc-kubernetes-manifests,DirAvailable--var-lib-minikube,DirAvailable--var-lib-minikube-etcd,FileAvailable--etc-kubernetes-manifests-kube-scheduler.yaml,FileAvailable--etc-kubernetes-manifests-kube-apiserver.yaml,FileAvailable--etc-kubernetes-manifests-kube-controller-manager.yaml,FileAvailable--etc-kubernetes-manifests-etcd.yaml,Port-10250,Swap,NumCPU,Mem": Process exited with status 1
	stdout:
	[init] Using Kubernetes version: v1.20.0
	[preflight] Running pre-flight checks
	[preflight] Pulling images required for setting up a Kubernetes cluster
	[preflight] This might take a minute or two, depending on the speed of your internet connection
	[preflight] You can also perform this action in beforehand using 'kubeadm config images pull'
	[certs] Using certificateDir folder "/var/lib/minikube/certs"
	[certs] Using existing ca certificate authority
	[certs] Using existing apiserver certificate and key on disk
	[certs] Using existing apiserver-kubelet-client certificate and key on disk
	[certs] Using existing front-proxy-ca certificate authority
	[certs] Using existing front-proxy-client certificate and key on disk
	[certs] Using existing etcd/ca certificate authority
	[certs] Using existing etcd/server certificate and key on disk
	[certs] Using existing etcd/peer certificate and key on disk
	[certs] Using existing etcd/healthcheck-client certificate and key on disk
	[certs] Using existing apiserver-etcd-client certificate and key on disk
	[certs] Using the existing "sa" key
	[kubeconfig] Using kubeconfig folder "/etc/kubernetes"
	[kubeconfig] Writing "admin.conf" kubeconfig file
	[kubeconfig] Writing "kubelet.conf" kubeconfig file
	[kubeconfig] Writing "controller-manager.conf" kubeconfig file
	[kubeconfig] Writing "scheduler.conf" kubeconfig file
	[kubelet-start] Writing kubelet environment file with flags to file "/var/lib/kubelet/kubeadm-flags.env"
	[kubelet-start] Writing kubelet configuration to file "/var/lib/kubelet/config.yaml"
	[kubelet-start] Starting the kubelet
	[control-plane] Using manifest folder "/etc/kubernetes/manifests"
	[control-plane] Creating static Pod manifest for "kube-apiserver"
	[control-plane] Creating static Pod manifest for "kube-controller-manager"
	[control-plane] Creating static Pod manifest for "kube-scheduler"
	[etcd] Creating static Pod manifest for local etcd in "/etc/kubernetes/manifests"
	[wait-control-plane] Waiting for the kubelet to boot up the control plane as static Pods from directory "/etc/kubernetes/manifests". This can take up to 4m0s
	[kubelet-check] Initial timeout of 40s passed.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	[kubelet-check] It seems like the kubelet isn't running or healthy.
	[kubelet-check] The HTTP call equal to 'curl -sSL http://localhost:10248/healthz' failed with error: Get "http://localhost:10248/healthz": dial tcp 127.0.0.1:10248: connect: connection refused.
	
		Unfortunately, an error has occurred:
			timed out waiting for the condition
	
		This error is likely caused by:
			- The kubelet is not running
			- The kubelet is unhealthy due to a misconfiguration of the node in some way (required cgroups disabled)
	
		If you are on a systemd-powered system, you can try to troubleshoot the error with the following commands:
			- 'systemctl status kubelet'
			- 'journalctl -xeu kubelet'
	
		Additionally, a control plane component may have crashed or exited when started by the container runtime.
		To troubleshoot, list all containers using your preferred container runtimes CLI.
	
		Here is one example how you may list all Kubernetes containers running in cri-o/containerd using crictl:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause'
			Once you have found the failing container, you can inspect its logs with:
			- 'crictl --runtime-endpoint /var/run/crio/crio.sock logs CONTAINERID'
	
	
	stderr:
		[WARNING Service-Kubelet]: kubelet service is not enabled, please run 'systemctl enable kubelet.service'
	error execution phase wait-control-plane: couldn't initialize a Kubernetes cluster
	To see the stack trace of this error execute with --v=5 or higher
	
	W0314 19:33:00.774165  992344 out.go:239] * Suggestion: Check output of 'journalctl -xeu kubelet', try passing --extra-config=kubelet.cgroup-driver=systemd to minikube start
	W0314 19:33:00.774200  992344 out.go:239] * Related issue: https://github.com/kubernetes/minikube/issues/4172
	I0314 19:33:00.775839  992344 out.go:177] 
	
	
	==> CRI-O <==
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.830341156Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445435830307344,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=efe846b8-0bbd-4c86-8dc4-334781f43345 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.831017121Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=e50b7712-d7d8-4f48-9eea-6cf7e4c48872 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.831070485Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=e50b7712-d7d8-4f48-9eea-6cf7e4c48872 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.831162191Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=e50b7712-d7d8-4f48-9eea-6cf7e4c48872 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.867296601Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=df17b39d-a46e-4e41-b696-a263fdbcb931 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.867367874Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=df17b39d-a46e-4e41-b696-a263fdbcb931 name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.868611313Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=e88f3ea8-ba1c-40b8-bd5a-c891192f9863 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.869036314Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445435869003855,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=e88f3ea8-ba1c-40b8-bd5a-c891192f9863 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.869750044Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=9dfead5d-3787-4c4c-bc48-0f663c62df60 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.869798620Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=9dfead5d-3787-4c4c-bc48-0f663c62df60 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.869832820Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=9dfead5d-3787-4c4c-bc48-0f663c62df60 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.907760261Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=3db1c380-97f9-44c1-9c7a-68dfa02fcf5e name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.907861538Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=3db1c380-97f9-44c1-9c7a-68dfa02fcf5e name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.909342447Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=34a07eba-6943-4917-8566-683ccd05336f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.909780005Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445435909757592,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=34a07eba-6943-4917-8566-683ccd05336f name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.910625544Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=aad0ce98-7a33-4879-a847-c775d7dbe987 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.910707069Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=aad0ce98-7a33-4879-a847-c775d7dbe987 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.910752671Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=aad0ce98-7a33-4879-a847-c775d7dbe987 name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.945902022Z" level=debug msg="Request: &VersionRequest{Version:,}" file="otel-collector/interceptors.go:62" id=58a0a29c-4c04-4c5b-93b8-3a1d319a87ce name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.946011895Z" level=debug msg="Response: &VersionResponse{Version:0.1.0,RuntimeName:cri-o,RuntimeVersion:1.29.1,RuntimeApiVersion:v1,}" file="otel-collector/interceptors.go:74" id=58a0a29c-4c04-4c5b-93b8-3a1d319a87ce name=/runtime.v1.RuntimeService/Version
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.947823219Z" level=debug msg="Request: &ImageFsInfoRequest{}" file="otel-collector/interceptors.go:62" id=a89ba5ca-682a-4dd6-957c-393435edd2e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.948319986Z" level=debug msg="Response: &ImageFsInfoResponse{ImageFilesystems:[]*FilesystemUsage{&FilesystemUsage{Timestamp:1710445435948297595,FsId:&FilesystemIdentifier{Mountpoint:/var/lib/containers/storage/overlay-images,},UsedBytes:&UInt64Value{Value:112689,},InodesUsed:&UInt64Value{Value:57,},},},ContainerFilesystems:[]*FilesystemUsage{},}" file="otel-collector/interceptors.go:74" id=a89ba5ca-682a-4dd6-957c-393435edd2e8 name=/runtime.v1.ImageService/ImageFsInfo
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.948807854Z" level=debug msg="Request: &ListContainersRequest{Filter:&ContainerFilter{Id:,State:nil,PodSandboxId:,LabelSelector:map[string]string{},},}" file="otel-collector/interceptors.go:62" id=1c499d18-5d85-49d5-99fc-bf03594fecca name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.948884347Z" level=debug msg="No filters were applied, returning full container list" file="server/container_list.go:60" id=1c499d18-5d85-49d5-99fc-bf03594fecca name=/runtime.v1.RuntimeService/ListContainers
	Mar 14 19:43:55 old-k8s-version-968094 crio[643]: time="2024-03-14 19:43:55.948952132Z" level=debug msg="Response: &ListContainersResponse{Containers:[]*Container{},}" file="otel-collector/interceptors.go:74" id=1c499d18-5d85-49d5-99fc-bf03594fecca name=/runtime.v1.RuntimeService/ListContainers
	
	
	==> container status <==
	CONTAINER           IMAGE               CREATED             STATE               NAME                ATTEMPT             POD ID              POD
	
	
	==> describe nodes <==
	command /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig" failed with error: /bin/bash -c "sudo /var/lib/minikube/binaries/v1.20.0/kubectl describe nodes --kubeconfig=/var/lib/minikube/kubeconfig": Process exited with status 1
	stdout:
	
	stderr:
	The connection to the server localhost:8443 was refused - did you specify the right host or port?
	
	
	==> dmesg <==
	[Mar14 19:24] You have booted with nomodeset. This means your GPU drivers are DISABLED
	[  +0.000001] Any video related functionality will be severely degraded, and you may not even be able to suspend the system properly
	[  +0.000000] Unless you actually understand what nomodeset does, you should reboot without enabling it
	[  +0.056380] Spectre V2 : WARNING: Unprivileged eBPF is enabled with eIBRS on, data leaks possible via Spectre v2 BHB attacks!
	[  +0.043046] acpi PNP0A03:00: fail to add MMCONFIG information, can't access extended PCI configuration space under this bridge.
	[  +4.730295] platform regulatory.0: Direct firmware load for regulatory.db failed with error -2
	[  +3.383233] systemd-fstab-generator[114]: Ignoring "noauto" option for root device
	[  +1.688071] NFSD: Using /var/lib/nfs/v4recovery as the NFSv4 state recovery directory
	[  +0.000009] NFSD: unable to find recovery directory /var/lib/nfs/v4recovery
	[  +0.000002] NFSD: Unable to initialize client recovery tracking! (-2)
	[ +10.628591] systemd-fstab-generator[566]: Ignoring "noauto" option for root device
	[  +0.061195] kauditd_printk_skb: 1 callbacks suppressed
	[  +0.075909] systemd-fstab-generator[578]: Ignoring "noauto" option for root device
	[  +0.206378] systemd-fstab-generator[592]: Ignoring "noauto" option for root device
	[  +0.171886] systemd-fstab-generator[605]: Ignoring "noauto" option for root device
	[  +0.322045] systemd-fstab-generator[630]: Ignoring "noauto" option for root device
	[  +7.118781] systemd-fstab-generator[828]: Ignoring "noauto" option for root device
	[  +0.064097] kauditd_printk_skb: 130 callbacks suppressed
	[Mar14 19:25] systemd-fstab-generator[955]: Ignoring "noauto" option for root device
	[  +8.646452] kauditd_printk_skb: 46 callbacks suppressed
	[Mar14 19:29] systemd-fstab-generator[4997]: Ignoring "noauto" option for root device
	[Mar14 19:31] systemd-fstab-generator[5276]: Ignoring "noauto" option for root device
	[  +0.079625] kauditd_printk_skb: 12 callbacks suppressed
	
	
	==> kernel <==
	 19:43:56 up 19 min,  0 users,  load average: 0.00, 0.05, 0.06
	Linux old-k8s-version-968094 5.10.207 #1 SMP Wed Mar 13 22:01:28 UTC 2024 x86_64 GNU/Linux
	PRETTY_NAME="Buildroot 2023.02.9"
	
	
	==> kubelet <==
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:138 +0x185
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run.func1()
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:222 +0x70
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0xc0006696f0)
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:155 +0x5f
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000bffef0, 0x4f0ac20, 0xc000b2eb40, 0x1, 0xc0001000c0)
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:156 +0xad
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache.(*Reflector).Run(0xc0000d90a0, 0xc0001000c0)
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/client-go/tools/cache/reflector.go:220 +0x1c5
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).StartWithChannel.func1()
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:56 +0x2e
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1(0xc000b48540, 0xc000ac9820)
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:73 +0x51
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]: created by k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait.(*Group).Start
	Mar 14 19:43:52 old-k8s-version-968094 kubelet[6717]:         /workspace/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:71 +0x65
	Mar 14 19:43:52 old-k8s-version-968094 systemd[1]: kubelet.service: Main process exited, code=exited, status=255/EXCEPTION
	Mar 14 19:43:52 old-k8s-version-968094 systemd[1]: kubelet.service: Failed with result 'exit-code'.
	Mar 14 19:43:53 old-k8s-version-968094 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 133.
	Mar 14 19:43:53 old-k8s-version-968094 systemd[1]: Stopped kubelet: The Kubernetes Node Agent.
	Mar 14 19:43:53 old-k8s-version-968094 systemd[1]: Started kubelet: The Kubernetes Node Agent.
	Mar 14 19:43:53 old-k8s-version-968094 kubelet[6726]: I0314 19:43:53.490325    6726 server.go:416] Version: v1.20.0
	Mar 14 19:43:53 old-k8s-version-968094 kubelet[6726]: I0314 19:43:53.490789    6726 server.go:837] Client rotation is on, will bootstrap in background
	Mar 14 19:43:53 old-k8s-version-968094 kubelet[6726]: I0314 19:43:53.496542    6726 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem".
	Mar 14 19:43:53 old-k8s-version-968094 kubelet[6726]: I0314 19:43:53.501795    6726 dynamic_cafile_content.go:167] Starting client-ca-bundle::/var/lib/minikube/certs/ca.crt
	Mar 14 19:43:53 old-k8s-version-968094 kubelet[6726]: W0314 19:43:53.501968    6726 manager.go:159] Cannot detect current cgroup on cgroup v2
	

                                                
                                                
-- /stdout --
helpers_test.go:254: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094
helpers_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 2 (269.949949ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
helpers_test.go:254: status error: exit status 2 (may be ok)
helpers_test.go:256: "old-k8s-version-968094" apiserver is not running, skipping kubectl commands (state="Stopped")
--- FAIL: TestStartStop/group/old-k8s-version/serial/AddonExistsAfterStop (109.94s)
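helpers_test.go note: for context, a minimal reproduction sketch of the diagnosis steps that the captured kubeadm/minikube output above recommends (the profile name old-k8s-version-968094 and a still-running VM are assumptions; every command below is taken from the suggestions printed in the log, not from the test harness itself):

	# Check whether the kubelet is running and why it exited (suggested by kubeadm in the log above)
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo systemctl status kubelet"
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo journalctl -xeu kubelet"
	# List any control-plane containers CRI-O managed to start (suggested by kubeadm in the log above)
	out/minikube-linux-amd64 -p old-k8s-version-968094 ssh "sudo crictl --runtime-endpoint /var/run/crio/crio.sock ps -a | grep kube | grep -v pause"
	# Collect full logs for a bug report, as the minikube suggestion box above recommends
	out/minikube-linux-amd64 logs -p old-k8s-version-968094 --file=logs.txt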

                                                
                                    

Test pass (256/325)

Order passed test Duration
3 TestDownloadOnly/v1.20.0/json-events 7.95
4 TestDownloadOnly/v1.20.0/preload-exists 0
8 TestDownloadOnly/v1.20.0/LogsDuration 0.08
9 TestDownloadOnly/v1.20.0/DeleteAll 0.15
10 TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds 0.14
12 TestDownloadOnly/v1.28.4/json-events 5.01
13 TestDownloadOnly/v1.28.4/preload-exists 0
17 TestDownloadOnly/v1.28.4/LogsDuration 0.08
18 TestDownloadOnly/v1.28.4/DeleteAll 0.15
19 TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds 0.14
21 TestDownloadOnly/v1.29.0-rc.2/json-events 5.63
22 TestDownloadOnly/v1.29.0-rc.2/preload-exists 0
26 TestDownloadOnly/v1.29.0-rc.2/LogsDuration 0.08
27 TestDownloadOnly/v1.29.0-rc.2/DeleteAll 0.15
28 TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds 0.13
30 TestBinaryMirror 0.58
31 TestOffline 60.17
34 TestAddons/PreSetup/EnablingAddonOnNonExistingCluster 0.07
35 TestAddons/PreSetup/DisablingAddonOnNonExistingCluster 0.07
36 TestAddons/Setup 140.54
38 TestAddons/parallel/Registry 17.93
40 TestAddons/parallel/InspektorGadget 11.94
41 TestAddons/parallel/MetricsServer 6.95
42 TestAddons/parallel/HelmTiller 9.72
44 TestAddons/parallel/CSI 78.63
45 TestAddons/parallel/Headlamp 15.84
46 TestAddons/parallel/CloudSpanner 6.65
47 TestAddons/parallel/LocalPath 54.65
48 TestAddons/parallel/NvidiaDevicePlugin 6.7
49 TestAddons/parallel/Yakd 6.01
52 TestAddons/serial/GCPAuth/Namespaces 0.12
54 TestCertOptions 84.76
55 TestCertExpiration 276.85
57 TestForceSystemdFlag 54.69
58 TestForceSystemdEnv 70.13
60 TestKVMDriverInstallOrUpdate 1.77
64 TestErrorSpam/setup 48.29
65 TestErrorSpam/start 0.39
66 TestErrorSpam/status 0.79
67 TestErrorSpam/pause 1.66
68 TestErrorSpam/unpause 1.73
69 TestErrorSpam/stop 5.85
72 TestFunctional/serial/CopySyncFile 0
73 TestFunctional/serial/StartWithProxy 89.22
74 TestFunctional/serial/AuditLog 0
75 TestFunctional/serial/SoftStart 46.35
76 TestFunctional/serial/KubeContext 0.04
77 TestFunctional/serial/KubectlGetPods 0.07
80 TestFunctional/serial/CacheCmd/cache/add_remote 3.31
81 TestFunctional/serial/CacheCmd/cache/add_local 1.1
82 TestFunctional/serial/CacheCmd/cache/CacheDelete 0.06
83 TestFunctional/serial/CacheCmd/cache/list 0.06
84 TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node 0.24
85 TestFunctional/serial/CacheCmd/cache/cache_reload 1.72
86 TestFunctional/serial/CacheCmd/cache/delete 0.12
87 TestFunctional/serial/MinikubeKubectlCmd 0.12
88 TestFunctional/serial/MinikubeKubectlCmdDirectly 0.12
89 TestFunctional/serial/ExtraConfig 35.01
90 TestFunctional/serial/ComponentHealth 0.07
91 TestFunctional/serial/LogsCmd 1.53
92 TestFunctional/serial/LogsFileCmd 1.58
93 TestFunctional/serial/InvalidService 4.17
95 TestFunctional/parallel/ConfigCmd 0.42
96 TestFunctional/parallel/DashboardCmd 27.25
97 TestFunctional/parallel/DryRun 0.34
98 TestFunctional/parallel/InternationalLanguage 0.17
99 TestFunctional/parallel/StatusCmd 1.12
103 TestFunctional/parallel/ServiceCmdConnect 11.62
104 TestFunctional/parallel/AddonsCmd 0.17
105 TestFunctional/parallel/PersistentVolumeClaim 44.58
107 TestFunctional/parallel/SSHCmd 0.47
108 TestFunctional/parallel/CpCmd 1.5
109 TestFunctional/parallel/MySQL 25.56
110 TestFunctional/parallel/FileSync 0.25
111 TestFunctional/parallel/CertSync 1.46
115 TestFunctional/parallel/NodeLabels 0.07
117 TestFunctional/parallel/NonActiveRuntimeDisabled 0.53
119 TestFunctional/parallel/License 0.16
129 TestFunctional/parallel/ServiceCmd/DeployApp 10.2
130 TestFunctional/parallel/ProfileCmd/profile_not_create 0.39
131 TestFunctional/parallel/ProfileCmd/profile_list 0.41
132 TestFunctional/parallel/ProfileCmd/profile_json_output 0.38
133 TestFunctional/parallel/MountCmd/any-port 6.67
134 TestFunctional/parallel/MountCmd/specific-port 2.03
135 TestFunctional/parallel/ServiceCmd/List 0.36
136 TestFunctional/parallel/ServiceCmd/JSONOutput 0.5
137 TestFunctional/parallel/MountCmd/VerifyCleanup 1.57
138 TestFunctional/parallel/ServiceCmd/HTTPS 0.44
139 TestFunctional/parallel/ServiceCmd/Format 0.44
140 TestFunctional/parallel/ServiceCmd/URL 0.36
141 TestFunctional/parallel/Version/short 0.06
142 TestFunctional/parallel/Version/components 0.74
143 TestFunctional/parallel/ImageCommands/ImageListShort 0.35
144 TestFunctional/parallel/ImageCommands/ImageListTable 0.26
145 TestFunctional/parallel/ImageCommands/ImageListJson 0.31
146 TestFunctional/parallel/ImageCommands/ImageListYaml 0.35
147 TestFunctional/parallel/ImageCommands/ImageBuild 2.32
148 TestFunctional/parallel/ImageCommands/Setup 1.19
150 TestFunctional/parallel/ImageCommands/ImageReloadDaemon 3.2
151 TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon 10.35
152 TestFunctional/parallel/UpdateContextCmd/no_changes 0.13
153 TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster 0.11
154 TestFunctional/parallel/UpdateContextCmd/no_clusters 0.12
155 TestFunctional/parallel/ImageCommands/ImageSaveToFile 1.28
156 TestFunctional/parallel/ImageCommands/ImageRemove 0.61
157 TestFunctional/parallel/ImageCommands/ImageLoadFromFile 1.78
158 TestFunctional/parallel/ImageCommands/ImageSaveDaemon 1.16
159 TestFunctional/delete_addon-resizer_images 0.07
160 TestFunctional/delete_my-image_image 0.01
161 TestFunctional/delete_minikube_cached_images 0.02
165 TestMutliControlPlane/serial/StartCluster 211.63
166 TestMutliControlPlane/serial/DeployApp 5.36
167 TestMutliControlPlane/serial/PingHostFromPods 1.52
168 TestMutliControlPlane/serial/AddWorkerNode 45.11
169 TestMutliControlPlane/serial/NodeLabels 0.07
170 TestMutliControlPlane/serial/HAppyAfterClusterStart 0.56
171 TestMutliControlPlane/serial/CopyFile 14.12
173 TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop 3.5
175 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart 0.41
178 TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete 0.41
180 TestMutliControlPlane/serial/RestartCluster 291.46
181 TestMutliControlPlane/serial/DegradedAfterClusterRestart 0.39
182 TestMutliControlPlane/serial/AddSecondaryNode 75.82
183 TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd 0.56
187 TestJSONOutput/start/Command 99.12
188 TestJSONOutput/start/Audit 0
190 TestJSONOutput/start/parallel/DistinctCurrentSteps 0
191 TestJSONOutput/start/parallel/IncreasingCurrentSteps 0
193 TestJSONOutput/pause/Command 0.76
194 TestJSONOutput/pause/Audit 0
196 TestJSONOutput/pause/parallel/DistinctCurrentSteps 0
197 TestJSONOutput/pause/parallel/IncreasingCurrentSteps 0
199 TestJSONOutput/unpause/Command 0.66
200 TestJSONOutput/unpause/Audit 0
202 TestJSONOutput/unpause/parallel/DistinctCurrentSteps 0
203 TestJSONOutput/unpause/parallel/IncreasingCurrentSteps 0
205 TestJSONOutput/stop/Command 7.4
206 TestJSONOutput/stop/Audit 0
208 TestJSONOutput/stop/parallel/DistinctCurrentSteps 0
209 TestJSONOutput/stop/parallel/IncreasingCurrentSteps 0
210 TestErrorJSONOutput 0.22
215 TestMainNoArgs 0.06
216 TestMinikubeProfile 97.55
219 TestMountStart/serial/StartWithMountFirst 27.57
220 TestMountStart/serial/VerifyMountFirst 0.4
221 TestMountStart/serial/StartWithMountSecond 28.21
222 TestMountStart/serial/VerifyMountSecond 0.42
223 TestMountStart/serial/DeleteFirst 0.69
224 TestMountStart/serial/VerifyMountPostDelete 0.41
225 TestMountStart/serial/Stop 1.38
226 TestMountStart/serial/RestartStopped 21.87
227 TestMountStart/serial/VerifyMountPostStop 0.42
230 TestMultiNode/serial/FreshStart2Nodes 104.85
231 TestMultiNode/serial/DeployApp2Nodes 4.37
232 TestMultiNode/serial/PingHostFrom2Pods 0.97
233 TestMultiNode/serial/AddNode 39.87
234 TestMultiNode/serial/MultiNodeLabels 0.06
235 TestMultiNode/serial/ProfileList 0.23
236 TestMultiNode/serial/CopyFile 7.88
237 TestMultiNode/serial/StopNode 3.2
238 TestMultiNode/serial/StartAfterStop 29.42
240 TestMultiNode/serial/DeleteNode 2.39
242 TestMultiNode/serial/RestartMultiNode 172.9
243 TestMultiNode/serial/ValidateNameConflict 48.04
250 TestScheduledStopUnix 117.54
254 TestRunningBinaryUpgrade 204.05
258 TestStoppedBinaryUpgrade/Setup 0.48
259 TestStoppedBinaryUpgrade/Upgrade 182.2
268 TestPause/serial/Start 107
269 TestPause/serial/SecondStartNoReconfiguration 34.98
270 TestStoppedBinaryUpgrade/MinikubeLogs 1.1
272 TestNoKubernetes/serial/StartNoK8sWithVersion 0.08
273 TestNoKubernetes/serial/StartWithK8s 49
274 TestPause/serial/Pause 0.83
278 TestPause/serial/VerifyStatus 0.26
279 TestPause/serial/Unpause 0.76
280 TestPause/serial/PauseAgain 1
281 TestPause/serial/DeletePaused 1.08
286 TestNetworkPlugins/group/false 3.48
287 TestPause/serial/VerifyDeletedResources 0.27
291 TestNoKubernetes/serial/StartWithStopK8s 40.61
292 TestNoKubernetes/serial/Start 50.37
293 TestNoKubernetes/serial/VerifyK8sNotRunning 0.21
294 TestNoKubernetes/serial/ProfileList 0.9
295 TestNoKubernetes/serial/Stop 1.49
296 TestNoKubernetes/serial/StartNoArgs 64.98
299 TestNoKubernetes/serial/VerifyK8sNotRunningSecond 0.22
301 TestStartStop/group/no-preload/serial/FirstStart 150.86
303 TestStartStop/group/embed-certs/serial/FirstStart 97.05
305 TestStartStop/group/default-k8s-diff-port/serial/FirstStart 100.51
306 TestStartStop/group/no-preload/serial/DeployApp 7.32
307 TestStartStop/group/no-preload/serial/EnableAddonWhileActive 1.14
309 TestStartStop/group/embed-certs/serial/DeployApp 9.3
310 TestStartStop/group/embed-certs/serial/EnableAddonWhileActive 1.15
312 TestStartStop/group/default-k8s-diff-port/serial/DeployApp 9.3
313 TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive 1.13
318 TestStartStop/group/no-preload/serial/SecondStart 660.56
320 TestStartStop/group/embed-certs/serial/SecondStart 602.56
321 TestStartStop/group/old-k8s-version/serial/Stop 4.31
322 TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop 0.21
325 TestStartStop/group/default-k8s-diff-port/serial/SecondStart 571.52
335 TestStartStop/group/newest-cni/serial/FirstStart 60.29
336 TestNetworkPlugins/group/auto/Start 102.95
337 TestStartStop/group/newest-cni/serial/DeployApp 0
338 TestStartStop/group/newest-cni/serial/EnableAddonWhileActive 1.53
339 TestStartStop/group/newest-cni/serial/Stop 10.63
340 TestStartStop/group/newest-cni/serial/EnableAddonAfterStop 0.24
341 TestStartStop/group/newest-cni/serial/SecondStart 42.49
342 TestNetworkPlugins/group/flannel/Start 88.53
343 TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop 0
344 TestStartStop/group/newest-cni/serial/AddonExistsAfterStop 0
345 TestStartStop/group/newest-cni/serial/VerifyKubernetesImages 0.24
346 TestStartStop/group/newest-cni/serial/Pause 2.69
347 TestNetworkPlugins/group/enable-default-cni/Start 111.05
348 TestNetworkPlugins/group/auto/KubeletFlags 0.23
349 TestNetworkPlugins/group/auto/NetCatPod 12.22
350 TestNetworkPlugins/group/auto/DNS 0.22
351 TestNetworkPlugins/group/auto/Localhost 0.18
352 TestNetworkPlugins/group/auto/HairPin 0.15
353 TestNetworkPlugins/group/bridge/Start 106.38
354 TestNetworkPlugins/group/calico/Start 114.22
355 TestNetworkPlugins/group/flannel/ControllerPod 6.01
356 TestNetworkPlugins/group/flannel/KubeletFlags 0.24
357 TestNetworkPlugins/group/flannel/NetCatPod 12.28
358 TestNetworkPlugins/group/flannel/DNS 0.22
359 TestNetworkPlugins/group/flannel/Localhost 0.16
360 TestNetworkPlugins/group/flannel/HairPin 0.15
361 TestNetworkPlugins/group/kindnet/Start 77.95
362 TestNetworkPlugins/group/enable-default-cni/KubeletFlags 0.25
363 TestNetworkPlugins/group/enable-default-cni/NetCatPod 12.24
364 TestNetworkPlugins/group/enable-default-cni/DNS 0.19
365 TestNetworkPlugins/group/enable-default-cni/Localhost 0.16
366 TestNetworkPlugins/group/enable-default-cni/HairPin 0.15
367 TestNetworkPlugins/group/custom-flannel/Start 86.13
368 TestNetworkPlugins/group/bridge/KubeletFlags 0.3
369 TestNetworkPlugins/group/bridge/NetCatPod 14.31
370 TestNetworkPlugins/group/bridge/DNS 0.25
371 TestNetworkPlugins/group/bridge/Localhost 0.19
372 TestNetworkPlugins/group/bridge/HairPin 0.23
373 TestNetworkPlugins/group/calico/ControllerPod 6.01
374 TestNetworkPlugins/group/calico/KubeletFlags 0.25
375 TestNetworkPlugins/group/calico/NetCatPod 12.32
376 TestNetworkPlugins/group/kindnet/ControllerPod 6.01
377 TestNetworkPlugins/group/calico/DNS 0.17
378 TestNetworkPlugins/group/calico/Localhost 0.15
379 TestNetworkPlugins/group/calico/HairPin 0.14
380 TestNetworkPlugins/group/kindnet/KubeletFlags 0.24
381 TestNetworkPlugins/group/kindnet/NetCatPod 12.28
382 TestNetworkPlugins/group/kindnet/DNS 0.18
383 TestNetworkPlugins/group/kindnet/Localhost 0.14
384 TestNetworkPlugins/group/kindnet/HairPin 0.16
385 TestNetworkPlugins/group/custom-flannel/KubeletFlags 0.22
386 TestNetworkPlugins/group/custom-flannel/NetCatPod 10.21
387 TestNetworkPlugins/group/custom-flannel/DNS 0.17
388 TestNetworkPlugins/group/custom-flannel/Localhost 0.16
389 TestNetworkPlugins/group/custom-flannel/HairPin 0.14
x
+
TestDownloadOnly/v1.20.0/json-events (7.95s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-516622 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-516622 --force --alsologtostderr --kubernetes-version=v1.20.0 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (7.953988415s)
--- PASS: TestDownloadOnly/v1.20.0/json-events (7.95s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/preload-exists (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/preload-exists
--- PASS: TestDownloadOnly/v1.20.0/preload-exists (0.00s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-516622
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-516622: exit status 85 (79.116013ms)

                                                
                                                
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      | End Time |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	| start   | -o=json --download-only        | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |          |
	|         | -p download-only-516622        |                      |         |         |                     |          |
	|         | --force --alsologtostderr      |                      |         |         |                     |          |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|         | --driver=kvm2                  |                      |         |         |                     |          |
	|         | --container-runtime=crio       |                      |         |         |                     |          |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|----------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:04:33
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:04:33.622633  951323 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:04:33.622898  951323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:33.622911  951323 out.go:304] Setting ErrFile to fd 2...
	I0314 18:04:33.622916  951323 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:33.623132  951323 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	W0314 18:04:33.623288  951323 root.go:314] Error reading config file at /home/jenkins/minikube-integration/18384-942544/.minikube/config/config.json: open /home/jenkins/minikube-integration/18384-942544/.minikube/config/config.json: no such file or directory
	I0314 18:04:33.623971  951323 out.go:298] Setting JSON to true
	I0314 18:04:33.625006  951323 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":92826,"bootTime":1710346648,"procs":178,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:04:33.625075  951323 start.go:139] virtualization: kvm guest
	I0314 18:04:33.627431  951323 out.go:97] [download-only-516622] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	W0314 18:04:33.627564  951323 preload.go:294] Failed to list preload files: open /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball: no such file or directory
	I0314 18:04:33.627614  951323 notify.go:220] Checking for updates...
	I0314 18:04:33.628992  951323 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:04:33.630505  951323 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:04:33.631984  951323 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:04:33.633386  951323 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:33.634991  951323 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0314 18:04:33.637893  951323 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 18:04:33.638123  951323 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:04:33.672853  951323 out.go:97] Using the kvm2 driver based on user configuration
	I0314 18:04:33.672886  951323 start.go:297] selected driver: kvm2
	I0314 18:04:33.672895  951323 start.go:901] validating driver "kvm2" against <nil>
	I0314 18:04:33.673338  951323 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:33.673434  951323 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:04:33.688669  951323 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:04:33.688741  951323 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:04:33.689214  951323 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0314 18:04:33.689378  951323 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 18:04:33.689415  951323 cni.go:84] Creating CNI manager for ""
	I0314 18:04:33.689428  951323 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 18:04:33.689439  951323 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:04:33.689506  951323 start.go:340] cluster config:
	{Name:download-only-516622 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.20.0 ClusterName:download-only-516622 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.20.0 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:04:33.689685  951323 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:33.691534  951323 out.go:97] Downloading VM boot image ...
	I0314 18:04:33.691574  951323 download.go:107] Downloading: https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso?checksum=file:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso.sha256 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/iso/amd64/minikube-v1.32.1-1710348681-18375-amd64.iso
	I0314 18:04:36.426466  951323 out.go:97] Starting "download-only-516622" primary control-plane node in "download-only-516622" cluster
	I0314 18:04:36.426507  951323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 18:04:36.443796  951323 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	I0314 18:04:36.443830  951323 cache.go:56] Caching tarball of preloaded images
	I0314 18:04:36.444005  951323 preload.go:132] Checking if preload exists for k8s version v1.20.0 and runtime crio
	I0314 18:04:36.446051  951323 out.go:97] Downloading Kubernetes v1.20.0 preload ...
	I0314 18:04:36.446073  951323 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4 ...
	I0314 18:04:36.470238  951323 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.20.0/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4?checksum=md5:f93b07cde9c3289306cbaeb7a1803c19 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.20.0-cri-o-overlay-amd64.tar.lz4
	
	
	* The control-plane node download-only-516622 host does not exist
	  To start a cluster, run: "minikube start -p download-only-516622"

                                                
                                                
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.20.0/LogsDuration (0.08s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.20.0/DeleteAll (0.15s)

                                                
                                    
x
+
TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-516622
--- PASS: TestDownloadOnly/v1.20.0/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.28.4/json-events (5.01s)

=== RUN   TestDownloadOnly/v1.28.4/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-944405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-944405 --force --alsologtostderr --kubernetes-version=v1.28.4 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.009083991s)
--- PASS: TestDownloadOnly/v1.28.4/json-events (5.01s)

TestDownloadOnly/v1.28.4/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.28.4/preload-exists
--- PASS: TestDownloadOnly/v1.28.4/preload-exists (0.00s)

TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.28.4/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-944405
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-944405: exit status 85 (76.882055ms)
-- stdout --
	
	==> Audit <==
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |              Args              |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only        | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | -p download-only-516622        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	| delete  | --all                          | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-516622        | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| start   | -o=json --download-only        | download-only-944405 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | -p download-only-944405        |                      |         |         |                     |                     |
	|         | --force --alsologtostderr      |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4   |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|         | --driver=kvm2                  |                      |         |         |                     |                     |
	|         | --container-runtime=crio       |                      |         |         |                     |                     |
	|---------|--------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:04:41
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:04:41.943090  951486 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:04:41.943378  951486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:41.943389  951486 out.go:304] Setting ErrFile to fd 2...
	I0314 18:04:41.943395  951486 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:41.943633  951486 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:04:41.944300  951486 out.go:298] Setting JSON to true
	I0314 18:04:41.945172  951486 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":92834,"bootTime":1710346648,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:04:41.945249  951486 start.go:139] virtualization: kvm guest
	I0314 18:04:41.947609  951486 out.go:97] [download-only-944405] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:04:41.949207  951486 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:04:41.947772  951486 notify.go:220] Checking for updates...
	I0314 18:04:41.952313  951486 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:04:41.953936  951486 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:04:41.955358  951486 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:41.956680  951486 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	
	
	* The control-plane node download-only-944405 host does not exist
	  To start a cluster, run: "minikube start -p download-only-944405"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.28.4/LogsDuration (0.08s)

TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.28.4/DeleteAll (0.15s)

TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

=== RUN   TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-944405
--- PASS: TestDownloadOnly/v1.28.4/DeleteAlwaysSucceeds (0.14s)

TestDownloadOnly/v1.29.0-rc.2/json-events (5.63s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/json-events
aaa_download_only_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -o=json --download-only -p download-only-090466 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio
aaa_download_only_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -o=json --download-only -p download-only-090466 --force --alsologtostderr --kubernetes-version=v1.29.0-rc.2 --container-runtime=crio --driver=kvm2  --container-runtime=crio: (5.630805931s)
--- PASS: TestDownloadOnly/v1.29.0-rc.2/json-events (5.63s)

TestDownloadOnly/v1.29.0-rc.2/preload-exists (0s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/preload-exists
--- PASS: TestDownloadOnly/v1.29.0-rc.2/preload-exists (0.00s)

TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/LogsDuration
aaa_download_only_test.go:184: (dbg) Run:  out/minikube-linux-amd64 logs -p download-only-090466
aaa_download_only_test.go:184: (dbg) Non-zero exit: out/minikube-linux-amd64 logs -p download-only-090466: exit status 85 (75.905409ms)
-- stdout --
	
	==> Audit <==
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| Command |               Args                |       Profile        |  User   | Version |     Start Time      |      End Time       |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	| start   | -o=json --download-only           | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | -p download-only-516622           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.20.0      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-516622           | download-only-516622 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| start   | -o=json --download-only           | download-only-944405 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | -p download-only-944405           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.28.4      |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	| delete  | --all                             | minikube             | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| delete  | -p download-only-944405           | download-only-944405 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC | 14 Mar 24 18:04 UTC |
	| start   | -o=json --download-only           | download-only-090466 | jenkins | v1.32.0 | 14 Mar 24 18:04 UTC |                     |
	|         | -p download-only-090466           |                      |         |         |                     |                     |
	|         | --force --alsologtostderr         |                      |         |         |                     |                     |
	|         | --kubernetes-version=v1.29.0-rc.2 |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|         | --driver=kvm2                     |                      |         |         |                     |                     |
	|         | --container-runtime=crio          |                      |         |         |                     |                     |
	|---------|-----------------------------------|----------------------|---------|---------|---------------------|---------------------|
	
	
	==> Last Start <==
	Log file created at: 2024/03/14 18:04:47
	Running on machine: ubuntu-20-agent-13
	Binary: Built with gc go1.22.1 for linux/amd64
	Log line format: [IWEF]mmdd hh:mm:ss.uuuuuu threadid file:line] msg
	I0314 18:04:47.310530  951638 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:04:47.310658  951638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:47.310668  951638 out.go:304] Setting ErrFile to fd 2...
	I0314 18:04:47.310672  951638 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:04:47.310908  951638 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:04:47.311544  951638 out.go:298] Setting JSON to true
	I0314 18:04:47.312481  951638 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":92839,"bootTime":1710346648,"procs":176,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:04:47.312543  951638 start.go:139] virtualization: kvm guest
	I0314 18:04:47.314867  951638 out.go:97] [download-only-090466] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:04:47.315080  951638 notify.go:220] Checking for updates...
	I0314 18:04:47.316471  951638 out.go:169] MINIKUBE_LOCATION=18384
	I0314 18:04:47.317903  951638 out.go:169] MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:04:47.319435  951638 out.go:169] KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:04:47.320707  951638 out.go:169] MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:04:47.322071  951638 out.go:169] MINIKUBE_BIN=out/minikube-linux-amd64
	W0314 18:04:47.324704  951638 out.go:267] minikube skips various validations when --force is supplied; this may lead to unexpected behavior
	I0314 18:04:47.324911  951638 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:04:47.355539  951638 out.go:97] Using the kvm2 driver based on user configuration
	I0314 18:04:47.355562  951638 start.go:297] selected driver: kvm2
	I0314 18:04:47.355568  951638 start.go:901] validating driver "kvm2" against <nil>
	I0314 18:04:47.355901  951638 install.go:52] acquiring lock: {Name:mk900956b073697a4aa6c80a27c6bb0742a99a53 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:47.355979  951638 install.go:117] Validating docker-machine-driver-kvm2, PATH=/home/jenkins/minikube-integration/18384-942544/.minikube/bin:/home/jenkins/workspace/KVM_Linux_crio_integration/out/:/usr/local/bin:/usr/bin:/bin:/usr/local/games:/usr/games:/usr/local/go/bin:/home/jenkins/go/bin:/usr/local/bin/:/usr/local/go/bin/:/home/jenkins/go/bin
	I0314 18:04:47.371363  951638 install.go:137] /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2 version is 1.32.0
	I0314 18:04:47.371408  951638 start_flags.go:310] no existing cluster config was found, will generate one from the flags 
	I0314 18:04:47.371859  951638 start_flags.go:393] Using suggested 6000MB memory alloc based on sys=32089MB, container=0MB
	I0314 18:04:47.372008  951638 start_flags.go:929] Wait components to verify : map[apiserver:true system_pods:true]
	I0314 18:04:47.372036  951638 cni.go:84] Creating CNI manager for ""
	I0314 18:04:47.372042  951638 cni.go:146] "kvm2" driver + "crio" runtime found, recommending bridge
	I0314 18:04:47.372051  951638 start_flags.go:319] Found "bridge CNI" CNI - setting NetworkPlugin=cni
	I0314 18:04:47.372109  951638 start.go:340] cluster config:
	{Name:download-only-090466 KeepContext:false EmbedCerts:false MinikubeISO: KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:6000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8443 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.29.0-rc.2 ClusterName:download-only-090466 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local
ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP: Port:8443 KubernetesVersion:v1.29.0-rc.2 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:04:47.372203  951638 iso.go:125] acquiring lock: {Name:mk586a3a5cfb4f22aec6aed37f8969c973afde28 Clock:{} Delay:500ms Timeout:10m0s Cancel:<nil>}
	I0314 18:04:47.373743  951638 out.go:97] Starting "download-only-090466" primary control-plane node in "download-only-090466" cluster
	I0314 18:04:47.373759  951638 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 18:04:47.402372  951638 preload.go:119] Found remote preload: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 18:04:47.402400  951638 cache.go:56] Caching tarball of preloaded images
	I0314 18:04:47.402526  951638 preload.go:132] Checking if preload exists for k8s version v1.29.0-rc.2 and runtime crio
	I0314 18:04:47.404044  951638 out.go:97] Downloading Kubernetes v1.29.0-rc.2 preload ...
	I0314 18:04:47.404057  951638 preload.go:237] getting checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0314 18:04:47.426525  951638 download.go:107] Downloading: https://storage.googleapis.com/minikube-preloaded-volume-tarballs/v18/v1.29.0-rc.2/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4?checksum=md5:9e0f57288adacc30aad3ff7e72a8dc68 -> /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4
	I0314 18:04:51.555117  951638 preload.go:248] saving checksum for preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	I0314 18:04:51.555216  951638 preload.go:255] verifying checksum of /home/jenkins/minikube-integration/18384-942544/.minikube/cache/preloaded-tarball/preloaded-images-k8s-v18-v1.29.0-rc.2-cri-o-overlay-amd64.tar.lz4 ...
	
	
	* The control-plane node download-only-090466 host does not exist
	  To start a cluster, run: "minikube start -p download-only-090466"
-- /stdout --
aaa_download_only_test.go:185: minikube logs failed with error: exit status 85
--- PASS: TestDownloadOnly/v1.29.0-rc.2/LogsDuration (0.08s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAll
aaa_download_only_test.go:197: (dbg) Run:  out/minikube-linux-amd64 delete --all
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAll (0.15s)

TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

=== RUN   TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds
aaa_download_only_test.go:208: (dbg) Run:  out/minikube-linux-amd64 delete -p download-only-090466
--- PASS: TestDownloadOnly/v1.29.0-rc.2/DeleteAlwaysSucceeds (0.13s)

TestBinaryMirror (0.58s)

=== RUN   TestBinaryMirror
aaa_download_only_test.go:314: (dbg) Run:  out/minikube-linux-amd64 start --download-only -p binary-mirror-359112 --alsologtostderr --binary-mirror http://127.0.0.1:42713 --driver=kvm2  --container-runtime=crio
helpers_test.go:175: Cleaning up "binary-mirror-359112" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p binary-mirror-359112
--- PASS: TestBinaryMirror (0.58s)

TestOffline (60.17s)

=== RUN   TestOffline
=== PAUSE TestOffline

=== CONT  TestOffline
aab_offline_test.go:55: (dbg) Run:  out/minikube-linux-amd64 start -p offline-crio-972570 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio
aab_offline_test.go:55: (dbg) Done: out/minikube-linux-amd64 start -p offline-crio-972570 --alsologtostderr -v=1 --memory=2048 --wait=true --driver=kvm2  --container-runtime=crio: (58.641826554s)
helpers_test.go:175: Cleaning up "offline-crio-972570" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p offline-crio-972570
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p offline-crio-972570: (1.528078657s)
--- PASS: TestOffline (60.17s)

TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/EnablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/EnablingAddonOnNonExistingCluster
addons_test.go:928: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p addons-677681
addons_test.go:928: (dbg) Non-zero exit: out/minikube-linux-amd64 addons enable dashboard -p addons-677681: exit status 85 (71.721867ms)
-- stdout --
	* Profile "addons-677681" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-677681"
-- /stdout --
--- PASS: TestAddons/PreSetup/EnablingAddonOnNonExistingCluster (0.07s)

TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

=== RUN   TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
=== PAUSE TestAddons/PreSetup/DisablingAddonOnNonExistingCluster

=== CONT  TestAddons/PreSetup/DisablingAddonOnNonExistingCluster
addons_test.go:939: (dbg) Run:  out/minikube-linux-amd64 addons disable dashboard -p addons-677681
addons_test.go:939: (dbg) Non-zero exit: out/minikube-linux-amd64 addons disable dashboard -p addons-677681: exit status 85 (71.60437ms)
-- stdout --
	* Profile "addons-677681" not found. Run "minikube profile list" to view all profiles.
	  To start a cluster, run: "minikube start -p addons-677681"
-- /stdout --
--- PASS: TestAddons/PreSetup/DisablingAddonOnNonExistingCluster (0.07s)

TestAddons/Setup (140.54s)

=== RUN   TestAddons/Setup
addons_test.go:109: (dbg) Run:  out/minikube-linux-amd64 start -p addons-677681 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller
addons_test.go:109: (dbg) Done: out/minikube-linux-amd64 start -p addons-677681 --wait=true --memory=4000 --alsologtostderr --addons=registry --addons=metrics-server --addons=volumesnapshots --addons=csi-hostpath-driver --addons=gcp-auth --addons=cloud-spanner --addons=inspektor-gadget --addons=storage-provisioner-rancher --addons=nvidia-device-plugin --addons=yakd --driver=kvm2  --container-runtime=crio --addons=ingress --addons=ingress-dns --addons=helm-tiller: (2m20.537366746s)
--- PASS: TestAddons/Setup (140.54s)

TestAddons/parallel/Registry (17.93s)

=== RUN   TestAddons/parallel/Registry
=== PAUSE TestAddons/parallel/Registry

=== CONT  TestAddons/parallel/Registry
addons_test.go:330: registry stabilized in 34.481059ms
addons_test.go:332: (dbg) TestAddons/parallel/Registry: waiting 6m0s for pods matching "actual-registry=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-pwdvn" [492bfa41-6a10-4828-9e01-3744a4cb381a] Running
addons_test.go:332: (dbg) TestAddons/parallel/Registry: actual-registry=true healthy within 5.008960059s
addons_test.go:335: (dbg) TestAddons/parallel/Registry: waiting 10m0s for pods matching "registry-proxy=true" in namespace "kube-system" ...
helpers_test.go:344: "registry-proxy-l87zl" [5e0174a6-c5fb-448a-9a4a-caf4fa57b737] Running
addons_test.go:335: (dbg) TestAddons/parallel/Registry: registry-proxy=true healthy within 6.006410466s
addons_test.go:340: (dbg) Run:  kubectl --context addons-677681 delete po -l run=registry-test --now
addons_test.go:345: (dbg) Run:  kubectl --context addons-677681 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local"
addons_test.go:345: (dbg) Done: kubectl --context addons-677681 run --rm registry-test --restart=Never --image=gcr.io/k8s-minikube/busybox -it -- sh -c "wget --spider -S http://registry.kube-system.svc.cluster.local": (5.921883043s)
addons_test.go:359: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 ip
2024/03/14 18:07:32 [DEBUG] GET http://192.168.39.215:5000
addons_test.go:388: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable registry --alsologtostderr -v=1
--- PASS: TestAddons/parallel/Registry (17.93s)

TestAddons/parallel/InspektorGadget (11.94s)

=== RUN   TestAddons/parallel/InspektorGadget
=== PAUSE TestAddons/parallel/InspektorGadget

=== CONT  TestAddons/parallel/InspektorGadget
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: waiting 8m0s for pods matching "k8s-app=gadget" in namespace "gadget" ...
helpers_test.go:344: "gadget-kf86v" [5a50d670-3bec-49a2-878d-08060047fde6] Running / Ready:ContainersNotReady (containers with unready status: [gadget]) / ContainersReady:ContainersNotReady (containers with unready status: [gadget])
addons_test.go:838: (dbg) TestAddons/parallel/InspektorGadget: k8s-app=gadget healthy within 6.005971203s
addons_test.go:841: (dbg) Run:  out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-677681
addons_test.go:841: (dbg) Done: out/minikube-linux-amd64 addons disable inspektor-gadget -p addons-677681: (5.929804627s)
--- PASS: TestAddons/parallel/InspektorGadget (11.94s)

TestAddons/parallel/MetricsServer (6.95s)

=== RUN   TestAddons/parallel/MetricsServer
=== PAUSE TestAddons/parallel/MetricsServer

=== CONT  TestAddons/parallel/MetricsServer
addons_test.go:407: metrics-server stabilized in 34.223945ms
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: waiting 6m0s for pods matching "k8s-app=metrics-server" in namespace "kube-system" ...
helpers_test.go:344: "metrics-server-69cf46c98-7jkq5" [27d0af32-b416-4e1a-bb9c-df89c071e23b] Running
addons_test.go:409: (dbg) TestAddons/parallel/MetricsServer: k8s-app=metrics-server healthy within 6.008787728s
addons_test.go:415: (dbg) Run:  kubectl --context addons-677681 top pods -n kube-system
addons_test.go:432: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable metrics-server --alsologtostderr -v=1
--- PASS: TestAddons/parallel/MetricsServer (6.95s)

TestAddons/parallel/HelmTiller (9.72s)

=== RUN   TestAddons/parallel/HelmTiller
=== PAUSE TestAddons/parallel/HelmTiller

=== CONT  TestAddons/parallel/HelmTiller
addons_test.go:456: tiller-deploy stabilized in 4.231374ms
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: waiting 6m0s for pods matching "app=helm" in namespace "kube-system" ...
helpers_test.go:344: "tiller-deploy-7b677967b9-pghvn" [3d96f26e-707b-4ed4-8262-d94ea4378716] Running
addons_test.go:458: (dbg) TestAddons/parallel/HelmTiller: app=helm healthy within 5.005641729s
addons_test.go:473: (dbg) Run:  kubectl --context addons-677681 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version
addons_test.go:473: (dbg) Done: kubectl --context addons-677681 run --rm helm-test --restart=Never --image=docker.io/alpine/helm:2.16.3 -it --namespace=kube-system -- version: (4.069098583s)
addons_test.go:490: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable helm-tiller --alsologtostderr -v=1
--- PASS: TestAddons/parallel/HelmTiller (9.72s)

TestAddons/parallel/CSI (78.63s)

=== RUN   TestAddons/parallel/CSI
=== PAUSE TestAddons/parallel/CSI

=== CONT  TestAddons/parallel/CSI
addons_test.go:561: csi-hostpath-driver pods stabilized in 36.493772ms
addons_test.go:564: (dbg) Run:  kubectl --context addons-677681 create -f testdata/csi-hostpath-driver/pvc.yaml
addons_test.go:569: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc -o jsonpath={.status.phase} -n default
addons_test.go:574: (dbg) Run:  kubectl --context addons-677681 create -f testdata/csi-hostpath-driver/pv-pod.yaml
addons_test.go:579: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod" in namespace "default" ...
helpers_test.go:344: "task-pv-pod" [055c6988-2131-4a7a-aea5-5d1ec31b596b] Pending
helpers_test.go:344: "task-pv-pod" [055c6988-2131-4a7a-aea5-5d1ec31b596b] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod" [055c6988-2131-4a7a-aea5-5d1ec31b596b] Running
addons_test.go:579: (dbg) TestAddons/parallel/CSI: app=task-pv-pod healthy within 19.005126245s
addons_test.go:584: (dbg) Run:  kubectl --context addons-677681 create -f testdata/csi-hostpath-driver/snapshot.yaml
addons_test.go:589: (dbg) TestAddons/parallel/CSI: waiting 6m0s for volume snapshot "new-snapshot-demo" in namespace "default" ...
helpers_test.go:419: (dbg) Run:  kubectl --context addons-677681 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
helpers_test.go:427: TestAddons/parallel/CSI: WARNING: volume snapshot get for "default" "new-snapshot-demo" returned: 
helpers_test.go:419: (dbg) Run:  kubectl --context addons-677681 get volumesnapshot new-snapshot-demo -o jsonpath={.status.readyToUse} -n default
addons_test.go:594: (dbg) Run:  kubectl --context addons-677681 delete pod task-pv-pod
addons_test.go:600: (dbg) Run:  kubectl --context addons-677681 delete pvc hpvc
addons_test.go:606: (dbg) Run:  kubectl --context addons-677681 create -f testdata/csi-hostpath-driver/pvc-restore.yaml
addons_test.go:611: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pvc "hpvc-restore" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc hpvc-restore -o jsonpath={.status.phase} -n default
addons_test.go:616: (dbg) Run:  kubectl --context addons-677681 create -f testdata/csi-hostpath-driver/pv-pod-restore.yaml
addons_test.go:621: (dbg) TestAddons/parallel/CSI: waiting 6m0s for pods matching "app=task-pv-pod-restore" in namespace "default" ...
helpers_test.go:344: "task-pv-pod-restore" [74d8e461-bbe7-4442-9936-1913839aa73c] Pending
helpers_test.go:344: "task-pv-pod-restore" [74d8e461-bbe7-4442-9936-1913839aa73c] Pending / Ready:ContainersNotReady (containers with unready status: [task-pv-container]) / ContainersReady:ContainersNotReady (containers with unready status: [task-pv-container])
helpers_test.go:344: "task-pv-pod-restore" [74d8e461-bbe7-4442-9936-1913839aa73c] Running
addons_test.go:621: (dbg) TestAddons/parallel/CSI: app=task-pv-pod-restore healthy within 7.004550357s
addons_test.go:626: (dbg) Run:  kubectl --context addons-677681 delete pod task-pv-pod-restore
addons_test.go:630: (dbg) Run:  kubectl --context addons-677681 delete pvc hpvc-restore
addons_test.go:634: (dbg) Run:  kubectl --context addons-677681 delete volumesnapshot new-snapshot-demo
addons_test.go:638: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable csi-hostpath-driver --alsologtostderr -v=1
addons_test.go:638: (dbg) Done: out/minikube-linux-amd64 -p addons-677681 addons disable csi-hostpath-driver --alsologtostderr -v=1: (6.842805216s)
addons_test.go:642: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable volumesnapshots --alsologtostderr -v=1
--- PASS: TestAddons/parallel/CSI (78.63s)

TestAddons/parallel/Headlamp (15.84s)

=== RUN   TestAddons/parallel/Headlamp
=== PAUSE TestAddons/parallel/Headlamp

=== CONT  TestAddons/parallel/Headlamp
addons_test.go:824: (dbg) Run:  out/minikube-linux-amd64 addons enable headlamp -p addons-677681 --alsologtostderr -v=1
addons_test.go:824: (dbg) Done: out/minikube-linux-amd64 addons enable headlamp -p addons-677681 --alsologtostderr -v=1: (1.835031357s)
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: waiting 8m0s for pods matching "app.kubernetes.io/name=headlamp" in namespace "headlamp" ...
helpers_test.go:344: "headlamp-5485c556b-lnjcc" [e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c] Pending / Ready:ContainersNotReady (containers with unready status: [headlamp]) / ContainersReady:ContainersNotReady (containers with unready status: [headlamp])
helpers_test.go:344: "headlamp-5485c556b-lnjcc" [e2aaa66f-7ab3-4b99-9e5f-2b0278353e6c] Running
addons_test.go:829: (dbg) TestAddons/parallel/Headlamp: app.kubernetes.io/name=headlamp healthy within 14.004539835s
--- PASS: TestAddons/parallel/Headlamp (15.84s)

TestAddons/parallel/CloudSpanner (6.65s)

=== RUN   TestAddons/parallel/CloudSpanner
=== PAUSE TestAddons/parallel/CloudSpanner

=== CONT  TestAddons/parallel/CloudSpanner
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: waiting 6m0s for pods matching "app=cloud-spanner-emulator" in namespace "default" ...
helpers_test.go:344: "cloud-spanner-emulator-6548d5df46-7ptzw" [f2845740-760b-484e-bc9b-8779fd80bb9a] Running
addons_test.go:857: (dbg) TestAddons/parallel/CloudSpanner: app=cloud-spanner-emulator healthy within 6.003670342s
addons_test.go:860: (dbg) Run:  out/minikube-linux-amd64 addons disable cloud-spanner -p addons-677681
--- PASS: TestAddons/parallel/CloudSpanner (6.65s)

TestAddons/parallel/LocalPath (54.65s)

=== RUN   TestAddons/parallel/LocalPath
=== PAUSE TestAddons/parallel/LocalPath

=== CONT  TestAddons/parallel/LocalPath
addons_test.go:873: (dbg) Run:  kubectl --context addons-677681 apply -f testdata/storage-provisioner-rancher/pvc.yaml
addons_test.go:879: (dbg) Run:  kubectl --context addons-677681 apply -f testdata/storage-provisioner-rancher/pod.yaml
addons_test.go:883: (dbg) TestAddons/parallel/LocalPath: waiting 5m0s for pvc "test-pvc" in namespace "default" ...
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
helpers_test.go:394: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o jsonpath={.status.phase} -n default
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: waiting 3m0s for pods matching "run=test-local-path" in namespace "default" ...
helpers_test.go:344: "test-local-path" [95fdd8fd-3570-43e8-bfbe-21ad792ba4e6] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "test-local-path" [95fdd8fd-3570-43e8-bfbe-21ad792ba4e6] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "test-local-path" [95fdd8fd-3570-43e8-bfbe-21ad792ba4e6] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
addons_test.go:886: (dbg) TestAddons/parallel/LocalPath: run=test-local-path healthy within 5.003783297s
addons_test.go:891: (dbg) Run:  kubectl --context addons-677681 get pvc test-pvc -o=json
addons_test.go:900: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 ssh "cat /opt/local-path-provisioner/pvc-3ea07a54-31e2-48a3-89b2-871a7a1d26bf_default_test-pvc/file1"
addons_test.go:912: (dbg) Run:  kubectl --context addons-677681 delete pod test-local-path
addons_test.go:916: (dbg) Run:  kubectl --context addons-677681 delete pvc test-pvc
addons_test.go:920: (dbg) Run:  out/minikube-linux-amd64 -p addons-677681 addons disable storage-provisioner-rancher --alsologtostderr -v=1
addons_test.go:920: (dbg) Done: out/minikube-linux-amd64 -p addons-677681 addons disable storage-provisioner-rancher --alsologtostderr -v=1: (43.795119366s)
--- PASS: TestAddons/parallel/LocalPath (54.65s)

TestAddons/parallel/NvidiaDevicePlugin (6.7s)

=== RUN   TestAddons/parallel/NvidiaDevicePlugin
=== PAUSE TestAddons/parallel/NvidiaDevicePlugin

=== CONT  TestAddons/parallel/NvidiaDevicePlugin
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: waiting 6m0s for pods matching "name=nvidia-device-plugin-ds" in namespace "kube-system" ...
helpers_test.go:344: "nvidia-device-plugin-daemonset-22t8w" [c334025a-fbc2-4b2c-b4f0-421a2b1481ac] Running
addons_test.go:952: (dbg) TestAddons/parallel/NvidiaDevicePlugin: name=nvidia-device-plugin-ds healthy within 6.007605858s
addons_test.go:955: (dbg) Run:  out/minikube-linux-amd64 addons disable nvidia-device-plugin -p addons-677681
--- PASS: TestAddons/parallel/NvidiaDevicePlugin (6.70s)

TestAddons/parallel/Yakd (6.01s)

=== RUN   TestAddons/parallel/Yakd
=== PAUSE TestAddons/parallel/Yakd

=== CONT  TestAddons/parallel/Yakd
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: waiting 2m0s for pods matching "app.kubernetes.io/name=yakd-dashboard" in namespace "yakd-dashboard" ...
helpers_test.go:344: "yakd-dashboard-9947fc6bf-xg4nt" [0f6ed210-68d9-4437-b638-6c36d1c56f27] Running
addons_test.go:963: (dbg) TestAddons/parallel/Yakd: app.kubernetes.io/name=yakd-dashboard healthy within 6.004193143s
--- PASS: TestAddons/parallel/Yakd (6.01s)

TestAddons/serial/GCPAuth/Namespaces (0.12s)

=== RUN   TestAddons/serial/GCPAuth/Namespaces
addons_test.go:650: (dbg) Run:  kubectl --context addons-677681 create ns new-namespace
addons_test.go:664: (dbg) Run:  kubectl --context addons-677681 get secret gcp-auth -n new-namespace
--- PASS: TestAddons/serial/GCPAuth/Namespaces (0.12s)

TestCertOptions (84.76s)

=== RUN   TestCertOptions
=== PAUSE TestCertOptions

=== CONT  TestCertOptions
cert_options_test.go:49: (dbg) Run:  out/minikube-linux-amd64 start -p cert-options-840108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio
cert_options_test.go:49: (dbg) Done: out/minikube-linux-amd64 start -p cert-options-840108 --memory=2048 --apiserver-ips=127.0.0.1 --apiserver-ips=192.168.15.15 --apiserver-names=localhost --apiserver-names=www.google.com --apiserver-port=8555 --driver=kvm2  --container-runtime=crio: (1m23.440215497s)
cert_options_test.go:60: (dbg) Run:  out/minikube-linux-amd64 -p cert-options-840108 ssh "openssl x509 -text -noout -in /var/lib/minikube/certs/apiserver.crt"
cert_options_test.go:88: (dbg) Run:  kubectl --context cert-options-840108 config view
cert_options_test.go:100: (dbg) Run:  out/minikube-linux-amd64 ssh -p cert-options-840108 -- "sudo cat /etc/kubernetes/admin.conf"
helpers_test.go:175: Cleaning up "cert-options-840108" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-options-840108
--- PASS: TestCertOptions (84.76s)

TestCertExpiration (276.85s)

=== RUN   TestCertExpiration
=== PAUSE TestCertExpiration

=== CONT  TestCertExpiration
cert_options_test.go:123: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-525214 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio
E0314 19:12:14.528712  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:12:14.854104  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
cert_options_test.go:123: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-525214 --memory=2048 --cert-expiration=3m --driver=kvm2  --container-runtime=crio: (55.41928702s)
cert_options_test.go:131: (dbg) Run:  out/minikube-linux-amd64 start -p cert-expiration-525214 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio
cert_options_test.go:131: (dbg) Done: out/minikube-linux-amd64 start -p cert-expiration-525214 --memory=2048 --cert-expiration=8760h --driver=kvm2  --container-runtime=crio: (40.609840366s)
helpers_test.go:175: Cleaning up "cert-expiration-525214" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cert-expiration-525214
--- PASS: TestCertExpiration (276.85s)

TestForceSystemdFlag (54.69s)

=== RUN   TestForceSystemdFlag
=== PAUSE TestForceSystemdFlag

=== CONT  TestForceSystemdFlag
docker_test.go:91: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-flag-234927 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:91: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-flag-234927 --memory=2048 --force-systemd --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (53.371506346s)
docker_test.go:132: (dbg) Run:  out/minikube-linux-amd64 -p force-systemd-flag-234927 ssh "cat /etc/crio/crio.conf.d/02-crio.conf"
helpers_test.go:175: Cleaning up "force-systemd-flag-234927" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-flag-234927
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-flag-234927: (1.081654936s)
--- PASS: TestForceSystemdFlag (54.69s)

TestForceSystemdEnv (70.13s)

=== RUN   TestForceSystemdEnv
=== PAUSE TestForceSystemdEnv

=== CONT  TestForceSystemdEnv
docker_test.go:155: (dbg) Run:  out/minikube-linux-amd64 start -p force-systemd-env-748636 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio
docker_test.go:155: (dbg) Done: out/minikube-linux-amd64 start -p force-systemd-env-748636 --memory=2048 --alsologtostderr -v=5 --driver=kvm2  --container-runtime=crio: (1m9.095640454s)
helpers_test.go:175: Cleaning up "force-systemd-env-748636" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p force-systemd-env-748636
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p force-systemd-env-748636: (1.031054909s)
--- PASS: TestForceSystemdEnv (70.13s)

                                                
                                    
TestKVMDriverInstallOrUpdate (1.77s)

                                                
                                                
=== RUN   TestKVMDriverInstallOrUpdate
=== PAUSE TestKVMDriverInstallOrUpdate

                                                
                                                

                                                
                                                
=== CONT  TestKVMDriverInstallOrUpdate
--- PASS: TestKVMDriverInstallOrUpdate (1.77s)

                                                
                                    
TestErrorSpam/setup (48.29s)

                                                
                                                
=== RUN   TestErrorSpam/setup
error_spam_test.go:81: (dbg) Run:  out/minikube-linux-amd64 start -p nospam-626530 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-626530 --driver=kvm2  --container-runtime=crio
error_spam_test.go:81: (dbg) Done: out/minikube-linux-amd64 start -p nospam-626530 -n=1 --memory=2250 --wait=false --log_dir=/tmp/nospam-626530 --driver=kvm2  --container-runtime=crio: (48.28821086s)
--- PASS: TestErrorSpam/setup (48.29s)

                                                
                                    
TestErrorSpam/start (0.39s)

                                                
                                                
=== RUN   TestErrorSpam/start
error_spam_test.go:216: Cleaning up 1 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 start --dry-run
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 start --dry-run
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 start --dry-run
--- PASS: TestErrorSpam/start (0.39s)

                                                
                                    
TestErrorSpam/status (0.79s)

                                                
                                                
=== RUN   TestErrorSpam/status
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 status
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 status
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 status
--- PASS: TestErrorSpam/status (0.79s)

                                                
                                    
TestErrorSpam/pause (1.66s)

                                                
                                                
=== RUN   TestErrorSpam/pause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 pause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 pause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 pause
--- PASS: TestErrorSpam/pause (1.66s)

                                                
                                    
TestErrorSpam/unpause (1.73s)

                                                
                                                
=== RUN   TestErrorSpam/unpause
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 unpause
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 unpause
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 unpause
--- PASS: TestErrorSpam/unpause (1.73s)

                                                
                                    
TestErrorSpam/stop (5.85s)

                                                
                                                
=== RUN   TestErrorSpam/stop
error_spam_test.go:216: Cleaning up 0 logfile(s) ...
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop: (2.296328345s)
error_spam_test.go:159: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop
error_spam_test.go:159: (dbg) Done: out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop: (1.486729s)
error_spam_test.go:182: (dbg) Run:  out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop
error_spam_test.go:182: (dbg) Done: out/minikube-linux-amd64 -p nospam-626530 --log_dir /tmp/nospam-626530 stop: (2.068298462s)
--- PASS: TestErrorSpam/stop (5.85s)

                                                
                                    
TestFunctional/serial/CopySyncFile (0s)

                                                
                                                
=== RUN   TestFunctional/serial/CopySyncFile
functional_test.go:1851: local sync path: /home/jenkins/minikube-integration/18384-942544/.minikube/files/etc/test/nested/copy/951311/hosts
--- PASS: TestFunctional/serial/CopySyncFile (0.00s)

                                                
                                    
TestFunctional/serial/StartWithProxy (89.22s)

                                                
                                                
=== RUN   TestFunctional/serial/StartWithProxy
functional_test.go:2230: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio
functional_test.go:2230: (dbg) Done: out/minikube-linux-amd64 start -p functional-059245 --memory=4000 --apiserver-port=8441 --wait=all --driver=kvm2  --container-runtime=crio: (1m29.222807544s)
--- PASS: TestFunctional/serial/StartWithProxy (89.22s)

                                                
                                    
TestFunctional/serial/AuditLog (0s)

                                                
                                                
=== RUN   TestFunctional/serial/AuditLog
--- PASS: TestFunctional/serial/AuditLog (0.00s)

                                                
                                    
TestFunctional/serial/SoftStart (46.35s)

                                                
                                                
=== RUN   TestFunctional/serial/SoftStart
functional_test.go:655: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --alsologtostderr -v=8
functional_test.go:655: (dbg) Done: out/minikube-linux-amd64 start -p functional-059245 --alsologtostderr -v=8: (46.346891031s)
functional_test.go:659: soft start took 46.347737771s for "functional-059245" cluster.
--- PASS: TestFunctional/serial/SoftStart (46.35s)

                                                
                                    
TestFunctional/serial/KubeContext (0.04s)

                                                
                                                
=== RUN   TestFunctional/serial/KubeContext
functional_test.go:677: (dbg) Run:  kubectl config current-context
--- PASS: TestFunctional/serial/KubeContext (0.04s)

                                                
                                    
TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/KubectlGetPods
functional_test.go:692: (dbg) Run:  kubectl --context functional-059245 get po -A
--- PASS: TestFunctional/serial/KubectlGetPods (0.07s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_remote (3.31s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_remote
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:3.1
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:3.1: (1.057491005s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:3.3
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:3.3: (1.136388956s)
functional_test.go:1045: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:latest
functional_test.go:1045: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 cache add registry.k8s.io/pause:latest: (1.118077785s)
--- PASS: TestFunctional/serial/CacheCmd/cache/add_remote (3.31s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/add_local (1.1s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/add_local
functional_test.go:1073: (dbg) Run:  docker build -t minikube-local-cache-test:functional-059245 /tmp/TestFunctionalserialCacheCmdcacheadd_local1185828210/001
functional_test.go:1085: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache add minikube-local-cache-test:functional-059245
functional_test.go:1090: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache delete minikube-local-cache-test:functional-059245
functional_test.go:1079: (dbg) Run:  docker rmi minikube-local-cache-test:functional-059245
--- PASS: TestFunctional/serial/CacheCmd/cache/add_local (1.10s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/CacheDelete
functional_test.go:1098: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.3
--- PASS: TestFunctional/serial/CacheCmd/cache/CacheDelete (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/list
functional_test.go:1106: (dbg) Run:  out/minikube-linux-amd64 cache list
--- PASS: TestFunctional/serial/CacheCmd/cache/list (0.06s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node
functional_test.go:1120: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh sudo crictl images
--- PASS: TestFunctional/serial/CacheCmd/cache/verify_cache_inside_node (0.24s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/cache_reload
functional_test.go:1143: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh sudo crictl rmi registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh sudo crictl inspecti registry.k8s.io/pause:latest
functional_test.go:1149: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh sudo crictl inspecti registry.k8s.io/pause:latest: exit status 1 (231.227102ms)

                                                
                                                
-- stdout --
	FATA[0000] no such image "registry.k8s.io/pause:latest" present 

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test.go:1154: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cache reload
functional_test.go:1159: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh sudo crictl inspecti registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/cache_reload (1.72s)

                                                
                                    
TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/CacheCmd/cache/delete
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:3.1
functional_test.go:1168: (dbg) Run:  out/minikube-linux-amd64 cache delete registry.k8s.io/pause:latest
--- PASS: TestFunctional/serial/CacheCmd/cache/delete (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmd
functional_test.go:712: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 kubectl -- --context functional-059245 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmd (0.12s)

                                                
                                    
TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                                
=== RUN   TestFunctional/serial/MinikubeKubectlCmdDirectly
functional_test.go:737: (dbg) Run:  out/kubectl --context functional-059245 get pods
--- PASS: TestFunctional/serial/MinikubeKubectlCmdDirectly (0.12s)

                                                
                                    
TestFunctional/serial/ExtraConfig (35.01s)

                                                
                                                
=== RUN   TestFunctional/serial/ExtraConfig
functional_test.go:753: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all
functional_test.go:753: (dbg) Done: out/minikube-linux-amd64 start -p functional-059245 --extra-config=apiserver.enable-admission-plugins=NamespaceAutoProvision --wait=all: (35.012837645s)
functional_test.go:757: restart took 35.012972541s for "functional-059245" cluster.
--- PASS: TestFunctional/serial/ExtraConfig (35.01s)

                                                
                                    
TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                                
=== RUN   TestFunctional/serial/ComponentHealth
functional_test.go:806: (dbg) Run:  kubectl --context functional-059245 get po -l tier=control-plane -n kube-system -o=json
functional_test.go:821: etcd phase: Running
functional_test.go:831: etcd status: Ready
functional_test.go:821: kube-apiserver phase: Running
functional_test.go:831: kube-apiserver status: Ready
functional_test.go:821: kube-controller-manager phase: Running
functional_test.go:831: kube-controller-manager status: Ready
functional_test.go:821: kube-scheduler phase: Running
functional_test.go:831: kube-scheduler status: Ready
--- PASS: TestFunctional/serial/ComponentHealth (0.07s)

                                                
                                    
TestFunctional/serial/LogsCmd (1.53s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsCmd
functional_test.go:1232: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 logs
functional_test.go:1232: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 logs: (1.52819238s)
--- PASS: TestFunctional/serial/LogsCmd (1.53s)

                                                
                                    
TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                                
=== RUN   TestFunctional/serial/LogsFileCmd
functional_test.go:1246: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 logs --file /tmp/TestFunctionalserialLogsFileCmd981008600/001/logs.txt
functional_test.go:1246: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 logs --file /tmp/TestFunctionalserialLogsFileCmd981008600/001/logs.txt: (1.578260623s)
--- PASS: TestFunctional/serial/LogsFileCmd (1.58s)

                                                
                                    
TestFunctional/serial/InvalidService (4.17s)

                                                
                                                
=== RUN   TestFunctional/serial/InvalidService
functional_test.go:2317: (dbg) Run:  kubectl --context functional-059245 apply -f testdata/invalidsvc.yaml
functional_test.go:2331: (dbg) Run:  out/minikube-linux-amd64 service invalid-svc -p functional-059245
functional_test.go:2331: (dbg) Non-zero exit: out/minikube-linux-amd64 service invalid-svc -p functional-059245: exit status 115 (306.155352ms)

                                                
                                                
-- stdout --
	|-----------|-------------|-------------|-----------------------------|
	| NAMESPACE |    NAME     | TARGET PORT |             URL             |
	|-----------|-------------|-------------|-----------------------------|
	| default   | invalid-svc |          80 | http://192.168.39.220:32104 |
	|-----------|-------------|-------------|-----------------------------|
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to SVC_UNREACHABLE: service not available: no running pod for service invalid-svc found
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_service_96b204199e3191fa1740d4430b018a3c8028d52d_0.log                 │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
functional_test.go:2323: (dbg) Run:  kubectl --context functional-059245 delete -f testdata/invalidsvc.yaml
--- PASS: TestFunctional/serial/InvalidService (4.17s)

                                                
                                    
TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                                
=== RUN   TestFunctional/parallel/ConfigCmd
=== PAUSE TestFunctional/parallel/ConfigCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ConfigCmd
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config get cpus
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 config get cpus: exit status 14 (70.894088ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config set cpus 2
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config get cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config unset cpus
functional_test.go:1195: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 config get cpus
E0314 18:17:14.854679  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:17:14.860672  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test.go:1195: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 config get cpus: exit status 14 (64.211746ms)

                                                
                                                
** stderr ** 
	Error: specified key could not be found in config

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/ConfigCmd (0.42s)

                                                
                                    
TestFunctional/parallel/DashboardCmd (27.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/DashboardCmd
=== PAUSE TestFunctional/parallel/DashboardCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DashboardCmd
functional_test.go:901: (dbg) daemon: [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059245 --alsologtostderr -v=1]
functional_test.go:906: (dbg) stopping [out/minikube-linux-amd64 dashboard --url --port 36195 -p functional-059245 --alsologtostderr -v=1] ...
helpers_test.go:508: unable to kill pid 959217: os: process already finished
--- PASS: TestFunctional/parallel/DashboardCmd (27.25s)

                                                
                                    
TestFunctional/parallel/DryRun (0.34s)

                                                
                                                
=== RUN   TestFunctional/parallel/DryRun
=== PAUSE TestFunctional/parallel/DryRun

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DryRun
functional_test.go:970: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:970: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059245 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.750508ms)

                                                
                                                
-- stdout --
	* [functional-059245] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on existing profile
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:17:27.454957  958946 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:17:27.455084  958946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:17:27.455097  958946 out.go:304] Setting ErrFile to fd 2...
	I0314 18:17:27.455103  958946 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:17:27.455340  958946 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:17:27.456010  958946 out.go:298] Setting JSON to false
	I0314 18:17:27.457189  958946 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":93599,"bootTime":1710346648,"procs":267,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:17:27.457258  958946 start.go:139] virtualization: kvm guest
	I0314 18:17:27.459271  958946 out.go:177] * [functional-059245] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 18:17:27.461509  958946 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:17:27.463081  958946 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:17:27.461467  958946 notify.go:220] Checking for updates...
	I0314 18:17:27.466074  958946 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:17:27.467372  958946 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:17:27.468810  958946 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:17:27.470066  958946 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:17:27.471828  958946 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:17:27.472240  958946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:17:27.472296  958946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:17:27.489168  958946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39025
	I0314 18:17:27.489562  958946 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:17:27.490134  958946 main.go:141] libmachine: Using API Version  1
	I0314 18:17:27.490503  958946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:17:27.491022  958946 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:17:27.491286  958946 main.go:141] libmachine: (functional-059245) Calling .DriverName
	I0314 18:17:27.491547  958946 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:17:27.491906  958946 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:17:27.491950  958946 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:17:27.508831  958946 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37719
	I0314 18:17:27.509255  958946 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:17:27.509809  958946 main.go:141] libmachine: Using API Version  1
	I0314 18:17:27.509836  958946 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:17:27.510228  958946 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:17:27.510420  958946 main.go:141] libmachine: (functional-059245) Calling .DriverName
	I0314 18:17:27.546197  958946 out.go:177] * Using the kvm2 driver based on existing profile
	I0314 18:17:27.547362  958946 start.go:297] selected driver: kvm2
	I0314 18:17:27.547395  958946 start.go:901] validating driver "kvm2" against &{Name:functional-059245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-059245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:17:27.547564  958946 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:17:27.549668  958946 out.go:177] 
	W0314 18:17:27.550797  958946 out.go:239] X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	X Exiting due to RSRC_INSUFFICIENT_REQ_MEMORY: Requested memory allocation 250MiB is less than the usable minimum of 1800MB
	I0314 18:17:27.551888  958946 out.go:177] 

                                                
                                                
** /stderr **
functional_test.go:987: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --dry-run --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
--- PASS: TestFunctional/parallel/DryRun (0.34s)

                                                
                                    
TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/InternationalLanguage
=== PAUSE TestFunctional/parallel/InternationalLanguage

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/InternationalLanguage
functional_test.go:1016: (dbg) Run:  out/minikube-linux-amd64 start -p functional-059245 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio
functional_test.go:1016: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p functional-059245 --dry-run --memory 250MB --alsologtostderr --driver=kvm2  --container-runtime=crio: exit status 23 (171.930611ms)

                                                
                                                
-- stdout --
	* [functional-059245] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Utilisation du pilote kvm2 basé sur le profil existant
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:17:27.402871  958932 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:17:27.403019  958932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:17:27.403039  958932 out.go:304] Setting ErrFile to fd 2...
	I0314 18:17:27.403050  958932 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:17:27.403334  958932 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:17:27.403854  958932 out.go:298] Setting JSON to false
	I0314 18:17:27.404941  958932 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":93599,"bootTime":1710346648,"procs":263,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 18:17:27.405039  958932 start.go:139] virtualization: kvm guest
	I0314 18:17:27.407446  958932 out.go:177] * [functional-059245] minikube v1.32.0 sur Ubuntu 20.04 (kvm/amd64)
	I0314 18:17:27.409012  958932 notify.go:220] Checking for updates...
	I0314 18:17:27.409029  958932 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 18:17:27.410419  958932 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 18:17:27.411753  958932 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 18:17:27.413195  958932 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 18:17:27.416362  958932 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 18:17:27.418364  958932 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 18:17:27.419944  958932 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:17:27.420488  958932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:17:27.420537  958932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:17:27.438548  958932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38021
	I0314 18:17:27.438989  958932 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:17:27.439941  958932 main.go:141] libmachine: Using API Version  1
	I0314 18:17:27.439959  958932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:17:27.440423  958932 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:17:27.440745  958932 main.go:141] libmachine: (functional-059245) Calling .DriverName
	I0314 18:17:27.441007  958932 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 18:17:27.441424  958932 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:17:27.441464  958932 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:17:27.457879  958932 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34269
	I0314 18:17:27.458662  958932 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:17:27.459497  958932 main.go:141] libmachine: Using API Version  1
	I0314 18:17:27.459521  958932 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:17:27.460035  958932 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:17:27.460205  958932 main.go:141] libmachine: (functional-059245) Calling .DriverName
	I0314 18:17:27.494410  958932 out.go:177] * Utilisation du pilote kvm2 basé sur le profil existant
	I0314 18:17:27.495690  958932 start.go:297] selected driver: kvm2
	I0314 18:17:27.495706  958932 start.go:901] validating driver "kvm2" against &{Name:functional-059245 KeepContext:false EmbedCerts:false MinikubeISO:https://storage.googleapis.com/minikube-builds/iso/18375/minikube-v1.32.1-1710348681-18375-amd64.iso KicBaseImage:gcr.io/k8s-minikube/kicbase-builds:v0.0.42-1710284843-18375@sha256:d67c38c9fc2ad14c48d95e17cbac49314325db5758d8f7b3de60b927e62ce94f Memory:4000 CPUs:2 DiskSize:20000 Driver:kvm2 HyperkitVpnKitSock: HyperkitVSockPorts:[] DockerEnv:[] ContainerVolumeMounts:[] InsecureRegistry:[] RegistryMirror:[] HostOnlyCIDR:192.168.59.1/24 HypervVirtualSwitch: HypervUseExternalSwitch:false HypervExternalAdapter: KVMNetwork:default KVMQemuURI:qemu:///system KVMGPU:false KVMHidden:false KVMNUMACount:1 APIServerPort:8441 DockerOpt:[] DisableDriverMounts:false NFSShare:[] NFSSharesRoot:/nfsshares UUID: NoVTXCheck:false DNSProxy:false HostDNSResolver:true HostOnlyNicType:virtio NatNicType:virtio SSHIPAddress: SSHUser:root SSHKey: SSHPort:22 KubernetesConfig:{KubernetesVersion:v1.28.4 ClusterName:functional-059245 Namespace:default APIServerHAVIP: APIServerName:minikubeCA APIServerNames:[] APIServerIPs:[] DNSDomain:cluster.local ContainerRuntime:crio CRISocket: NetworkPlugin:cni FeatureGates: ServiceCIDR:10.96.0.0/12 ImageRepository: LoadBalancerStartIP: LoadBalancerEndIP: CustomIngressCert: RegistryAliases: ExtraOptions:[{Component:apiserver Key:enable-admission-plugins Value:NamespaceAutoProvision}] ShouldLoadCachedImages:true EnableDefaultCNI:false CNI:} Nodes:[{Name: IP:192.168.39.220 Port:8441 KubernetesVersion:v1.28.4 ContainerRuntime:crio ControlPlane:true Worker:true}] Addons:map[default-storageclass:true storage-provisioner:true] CustomAddonImages:map[] CustomAddonRegistries:map[] VerifyComponents:map[apiserver:true apps_running:true default_sa:true extra:true kubelet:true node_ready:true system_pods:true] StartHostTimeout:6m0s ScheduledStop:<nil> ExposedPorts:[] ListenAddress: Network: Subnet: MultiNodeRequested:false ExtraDisks:0 CertExpiration:26280h0m0s Mount:false MountString:/home/jenkins:/minikube-host Mount9PVersion:9p2000.L MountGID:docker MountIP: MountMSize:262144 MountOptions:[] MountPort:0 MountType:9p MountUID:docker BinaryMirror: DisableOptimizations:false DisableMetrics:false CustomQemuFirmwarePath: SocketVMnetClientPath: SocketVMnetPath: StaticIP: SSHAuthSock: SSHAgentPID:0 GPUs: AutoPauseInterval:1m0s}
	I0314 18:17:27.495833  958932 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 18:17:27.497805  958932 out.go:177] 
	W0314 18:17:27.498929  958932 out.go:239] X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	X Fermeture en raison de RSRC_INSUFFICIENT_REQ_MEMORY : L'allocation de mémoire demandée 250 Mio est inférieure au minimum utilisable de 1800 Mo
	I0314 18:17:27.499991  958932 out.go:177] 

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/InternationalLanguage (0.17s)

                                                
                                    
TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                                
=== RUN   TestFunctional/parallel/StatusCmd
=== PAUSE TestFunctional/parallel/StatusCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/StatusCmd
functional_test.go:850: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 status
functional_test.go:856: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 status -f host:{{.Host}},kublet:{{.Kubelet}},apiserver:{{.APIServer}},kubeconfig:{{.Kubeconfig}}
functional_test.go:868: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 status -o json
--- PASS: TestFunctional/parallel/StatusCmd (1.12s)

                                                
                                    
TestFunctional/parallel/ServiceCmdConnect (11.62s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmdConnect
=== PAUSE TestFunctional/parallel/ServiceCmdConnect

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ServiceCmdConnect
functional_test.go:1625: (dbg) Run:  kubectl --context functional-059245 create deployment hello-node-connect --image=registry.k8s.io/echoserver:1.8
E0314 18:17:14.933504  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test.go:1631: (dbg) Run:  kubectl --context functional-059245 expose deployment hello-node-connect --type=NodePort --port=8080
E0314 18:17:15.014186  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: waiting 10m0s for pods matching "app=hello-node-connect" in namespace "default" ...
helpers_test.go:344: "hello-node-connect-55497b8b78-j7hxn" [34f5571d-8fd8-4cdf-90cd-f6b5bccd5a30] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-connect-55497b8b78-j7hxn" [34f5571d-8fd8-4cdf-90cd-f6b5bccd5a30] Running
functional_test.go:1636: (dbg) TestFunctional/parallel/ServiceCmdConnect: app=hello-node-connect healthy within 11.004438716s
functional_test.go:1645: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service hello-node-connect --url
functional_test.go:1651: found endpoint for hello-node-connect: http://192.168.39.220:31996
functional_test.go:1671: http://192.168.39.220:31996: success! body:

                                                
                                                

                                                
                                                
Hostname: hello-node-connect-55497b8b78-j7hxn

                                                
                                                
Pod Information:
	-no pod information available-

                                                
                                                
Server values:
	server_version=nginx: 1.13.3 - lua: 10008

                                                
                                                
Request Information:
	client_address=10.244.0.1
	method=GET
	real path=/
	query=
	request_version=1.1
	request_uri=http://192.168.39.220:8080/

                                                
                                                
Request Headers:
	accept-encoding=gzip
	host=192.168.39.220:31996
	user-agent=Go-http-client/1.1

                                                
                                                
Request Body:
	-no body in request-

                                                
                                                
--- PASS: TestFunctional/parallel/ServiceCmdConnect (11.62s)

                                                
                                    
TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                                
=== RUN   TestFunctional/parallel/AddonsCmd
=== PAUSE TestFunctional/parallel/AddonsCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/AddonsCmd
functional_test.go:1686: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 addons list
E0314 18:17:14.870960  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:17:14.891247  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test.go:1698: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 addons list -o json
--- PASS: TestFunctional/parallel/AddonsCmd (0.17s)

                                                
                                    
TestFunctional/parallel/PersistentVolumeClaim (44.58s)

                                                
                                                
=== RUN   TestFunctional/parallel/PersistentVolumeClaim
=== PAUSE TestFunctional/parallel/PersistentVolumeClaim

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PersistentVolumeClaim
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 4m0s for pods matching "integration-test=storage-provisioner" in namespace "kube-system" ...
helpers_test.go:344: "storage-provisioner" [efc542f6-4b80-4b0a-abad-ea25e6269e7e] Running
functional_test_pvc_test.go:44: (dbg) TestFunctional/parallel/PersistentVolumeClaim: integration-test=storage-provisioner healthy within 6.004984155s
functional_test_pvc_test.go:49: (dbg) Run:  kubectl --context functional-059245 get storageclass -o=json
functional_test_pvc_test.go:69: (dbg) Run:  kubectl --context functional-059245 apply -f testdata/storage-provisioner/pvc.yaml
functional_test_pvc_test.go:76: (dbg) Run:  kubectl --context functional-059245 get pvc myclaim -o=json
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [d7493a54-53a2-442b-ab02-ad9d2a8a169b] Pending
helpers_test.go:344: "sp-pod" [d7493a54-53a2-442b-ab02-ad9d2a8a169b] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [d7493a54-53a2-442b-ab02-ad9d2a8a169b] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 13.003977267s
functional_test_pvc_test.go:100: (dbg) Run:  kubectl --context functional-059245 exec sp-pod -- touch /tmp/mount/foo
functional_test_pvc_test.go:106: (dbg) Run:  kubectl --context functional-059245 delete -f testdata/storage-provisioner/pod.yaml
E0314 18:17:35.338512  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test_pvc_test.go:106: (dbg) Done: kubectl --context functional-059245 delete -f testdata/storage-provisioner/pod.yaml: (3.656881314s)
functional_test_pvc_test.go:125: (dbg) Run:  kubectl --context functional-059245 apply -f testdata/storage-provisioner/pod.yaml
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: waiting 3m0s for pods matching "test=storage-provisioner" in namespace "default" ...
helpers_test.go:344: "sp-pod" [7da48511-c8ca-4ebb-91b8-1535fbee3a3d] Pending
helpers_test.go:344: "sp-pod" [7da48511-c8ca-4ebb-91b8-1535fbee3a3d] Pending / Ready:ContainersNotReady (containers with unready status: [myfrontend]) / ContainersReady:ContainersNotReady (containers with unready status: [myfrontend])
helpers_test.go:344: "sp-pod" [7da48511-c8ca-4ebb-91b8-1535fbee3a3d] Running
functional_test_pvc_test.go:130: (dbg) TestFunctional/parallel/PersistentVolumeClaim: test=storage-provisioner healthy within 21.006455148s
functional_test_pvc_test.go:114: (dbg) Run:  kubectl --context functional-059245 exec sp-pod -- ls /tmp/mount
--- PASS: TestFunctional/parallel/PersistentVolumeClaim (44.58s)

                                                
                                    
TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                                
=== RUN   TestFunctional/parallel/SSHCmd
=== PAUSE TestFunctional/parallel/SSHCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/SSHCmd
functional_test.go:1721: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "echo hello"
functional_test.go:1738: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "cat /etc/hostname"
--- PASS: TestFunctional/parallel/SSHCmd (0.47s)

                                                
                                    
TestFunctional/parallel/CpCmd (1.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/CpCmd
=== PAUSE TestFunctional/parallel/CpCmd

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CpCmd
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cp testdata/cp-test.txt /home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh -n functional-059245 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cp functional-059245:/home/docker/cp-test.txt /tmp/TestFunctionalparallelCpCmd3206473713/001/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh -n functional-059245 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 cp testdata/cp-test.txt /tmp/does/not/exist/cp-test.txt
E0314 18:17:15.495409  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh -n functional-059245 "sudo cat /tmp/does/not/exist/cp-test.txt"
--- PASS: TestFunctional/parallel/CpCmd (1.50s)

                                                
                                    
TestFunctional/parallel/MySQL (25.56s)

                                                
                                                
=== RUN   TestFunctional/parallel/MySQL
=== PAUSE TestFunctional/parallel/MySQL

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/MySQL
functional_test.go:1789: (dbg) Run:  kubectl --context functional-059245 replace --force -f testdata/mysql.yaml
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: waiting 10m0s for pods matching "app=mysql" in namespace "default" ...
helpers_test.go:344: "mysql-859648c796-zs2sc" [fe20b5b8-8b1f-4acd-ad47-71a4a578d421] Pending / Ready:ContainersNotReady (containers with unready status: [mysql]) / ContainersReady:ContainersNotReady (containers with unready status: [mysql])
helpers_test.go:344: "mysql-859648c796-zs2sc" [fe20b5b8-8b1f-4acd-ad47-71a4a578d421] Running
functional_test.go:1795: (dbg) TestFunctional/parallel/MySQL: app=mysql healthy within 22.005480569s
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059245 exec mysql-859648c796-zs2sc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059245 exec mysql-859648c796-zs2sc -- mysql -ppassword -e "show databases;": exit status 1 (215.021314ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059245 exec mysql-859648c796-zs2sc -- mysql -ppassword -e "show databases;"
functional_test.go:1803: (dbg) Non-zero exit: kubectl --context functional-059245 exec mysql-859648c796-zs2sc -- mysql -ppassword -e "show databases;": exit status 1 (208.911012ms)

                                                
                                                
** stderr ** 
	mysql: [Warning] Using a password on the command line interface can be insecure.
	ERROR 2002 (HY000): Can't connect to local MySQL server through socket '/var/run/mysqld/mysqld.sock' (2)
	command terminated with exit code 1

                                                
                                                
** /stderr **
functional_test.go:1803: (dbg) Run:  kubectl --context functional-059245 exec mysql-859648c796-zs2sc -- mysql -ppassword -e "show databases;"
--- PASS: TestFunctional/parallel/MySQL (25.56s)

                                                
                                    
TestFunctional/parallel/FileSync (0.25s)

                                                
                                                
=== RUN   TestFunctional/parallel/FileSync
=== PAUSE TestFunctional/parallel/FileSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/FileSync
functional_test.go:1925: Checking for existence of /etc/test/nested/copy/951311/hosts within VM
functional_test.go:1927: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /etc/test/nested/copy/951311/hosts"
functional_test.go:1932: file sync test content: Test file for checking file sync process
--- PASS: TestFunctional/parallel/FileSync (0.25s)

                                                
                                    
TestFunctional/parallel/CertSync (1.46s)

                                                
                                                
=== RUN   TestFunctional/parallel/CertSync
=== PAUSE TestFunctional/parallel/CertSync

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/CertSync
functional_test.go:1968: Checking for existence of /etc/ssl/certs/951311.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /etc/ssl/certs/951311.pem"
functional_test.go:1968: Checking for existence of /usr/share/ca-certificates/951311.pem within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /usr/share/ca-certificates/951311.pem"
functional_test.go:1968: Checking for existence of /etc/ssl/certs/51391683.0 within VM
functional_test.go:1969: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /etc/ssl/certs/51391683.0"
2024/03/14 18:17:54 [DEBUG] GET http://127.0.0.1:36195/api/v1/namespaces/kubernetes-dashboard/services/http:kubernetes-dashboard:/proxy/
functional_test.go:1995: Checking for existence of /etc/ssl/certs/9513112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /etc/ssl/certs/9513112.pem"
functional_test.go:1995: Checking for existence of /usr/share/ca-certificates/9513112.pem within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /usr/share/ca-certificates/9513112.pem"
functional_test.go:1995: Checking for existence of /etc/ssl/certs/3ec20f2e.0 within VM
functional_test.go:1996: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo cat /etc/ssl/certs/3ec20f2e.0"
--- PASS: TestFunctional/parallel/CertSync (1.46s)
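The .0 files checked above (51391683.0, 3ec20f2e.0) follow the OpenSSL subject-hash naming form (<hash>.0) used under /etc/ssl/certs, which is why the test looks for both the .pem copy and the hashed file. A small sketch that derives that hashed filename for a given PEM, assuming the openssl CLI is available:

package main

import (
	"fmt"
	"os"
	"os/exec"
	"strings"
)

func main() {
	if len(os.Args) != 2 {
		fmt.Fprintln(os.Stderr, "usage: certhash <file.pem>")
		os.Exit(1)
	}
	// "openssl x509 -hash" prints the subject hash that /etc/ssl/certs/<hash>.0 uses.
	out, err := exec.Command("openssl", "x509", "-noout", "-hash", "-in", os.Args[1]).Output()
	if err != nil {
		fmt.Fprintln(os.Stderr, "openssl failed:", err)
		os.Exit(1)
	}
	fmt.Printf("/etc/ssl/certs/%s.0\n", strings.TrimSpace(string(out)))
}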

                                                
                                    
TestFunctional/parallel/NodeLabels (0.07s)

                                                
                                                
=== RUN   TestFunctional/parallel/NodeLabels
=== PAUSE TestFunctional/parallel/NodeLabels

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NodeLabels
functional_test.go:218: (dbg) Run:  kubectl --context functional-059245 get nodes --output=go-template "--template='{{range $k, $v := (index .items 0).metadata.labels}}{{$k}} {{end}}'"
--- PASS: TestFunctional/parallel/NodeLabels (0.07s)
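The go-template above walks the labels of the first node; the same listing can be done with JSONPath. A hypothetical alternative using the same context, shown only for illustration:

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Same idea as the go-template in the log: print the labels of the first node.
	out, err := exec.Command("kubectl", "--context", "functional-059245",
		"get", "nodes", "-o", "jsonpath={.items[0].metadata.labels}").Output()
	if err != nil {
		fmt.Println("kubectl failed:", err)
		return
	}
	fmt.Println(string(out))
}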

                                                
                                    
TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)

                                                
                                                
=== RUN   TestFunctional/parallel/NonActiveRuntimeDisabled
=== PAUSE TestFunctional/parallel/NonActiveRuntimeDisabled

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/NonActiveRuntimeDisabled
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo systemctl is-active docker"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "sudo systemctl is-active docker": exit status 1 (251.946627ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
functional_test.go:2023: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo systemctl is-active containerd"
functional_test.go:2023: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "sudo systemctl is-active containerd": exit status 1 (277.668905ms)

                                                
                                                
-- stdout --
	inactive

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestFunctional/parallel/NonActiveRuntimeDisabled (0.53s)
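systemctl is-active exits non-zero for anything other than an active unit (status 3 for inactive, matching the "inactive" stdout and "exited with status 3" above), so these failures are the expected result on a node whose runtime is crio. A sketch of the same probe across all three runtimes, reusing the binary path and profile from this run:

package main

import (
	"fmt"
	"os/exec"
	"strings"
)

func main() {
	// is-active exits 0 only for an active unit; inactive units come back non-zero.
	for _, unit := range []string{"docker", "containerd", "crio"} {
		out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-059245",
			"ssh", "sudo systemctl is-active "+unit).CombinedOutput()
		fmt.Printf("%-10s %s (err: %v)\n", unit, strings.TrimSpace(string(out)), err)
	}
}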

                                                
                                    
TestFunctional/parallel/License (0.16s)

                                                
                                                
=== RUN   TestFunctional/parallel/License
=== PAUSE TestFunctional/parallel/License

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/License
functional_test.go:2284: (dbg) Run:  out/minikube-linux-amd64 license
--- PASS: TestFunctional/parallel/License (0.16s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/DeployApp (10.2s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/DeployApp
functional_test.go:1435: (dbg) Run:  kubectl --context functional-059245 create deployment hello-node --image=registry.k8s.io/echoserver:1.8
functional_test.go:1441: (dbg) Run:  kubectl --context functional-059245 expose deployment hello-node --type=NodePort --port=8080
E0314 18:17:15.174857  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: waiting 10m0s for pods matching "app=hello-node" in namespace "default" ...
helpers_test.go:344: "hello-node-d7447cc7f-thqx2" [91fa0f19-b26b-4155-a79b-d6c60a77dcd3] Pending / Ready:ContainersNotReady (containers with unready status: [echoserver]) / ContainersReady:ContainersNotReady (containers with unready status: [echoserver])
helpers_test.go:344: "hello-node-d7447cc7f-thqx2" [91fa0f19-b26b-4155-a79b-d6c60a77dcd3] Running
functional_test.go:1446: (dbg) TestFunctional/parallel/ServiceCmd/DeployApp: app=hello-node healthy within 10.005210656s
--- PASS: TestFunctional/parallel/ServiceCmd/DeployApp (10.20s)
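The readiness polling on app=hello-node above can be approximated with kubectl's built-in wait. A rough sketch using the same context and label (the 120s timeout here is an arbitrary choice, not the test's value):

package main

import (
	"fmt"
	"os/exec"
)

func main() {
	// Block until the deployment's pods report Ready, like the helper polling above.
	out, err := exec.Command("kubectl", "--context", "functional-059245",
		"wait", "--for=condition=ready", "pod", "-l", "app=hello-node", "--timeout=120s").CombinedOutput()
	fmt.Print(string(out))
	if err != nil {
		fmt.Println("wait failed:", err)
	}
}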

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_not_create
functional_test.go:1266: (dbg) Run:  out/minikube-linux-amd64 profile lis
functional_test.go:1271: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
E0314 18:17:16.136323  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ProfileCmd/profile_not_create (0.39s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_list
functional_test.go:1306: (dbg) Run:  out/minikube-linux-amd64 profile list
functional_test.go:1311: Took "336.24544ms" to run "out/minikube-linux-amd64 profile list"
functional_test.go:1320: (dbg) Run:  out/minikube-linux-amd64 profile list -l
functional_test.go:1325: Took "71.346704ms" to run "out/minikube-linux-amd64 profile list -l"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_list (0.41s)

                                                
                                    
TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                                
=== RUN   TestFunctional/parallel/ProfileCmd/profile_json_output
functional_test.go:1357: (dbg) Run:  out/minikube-linux-amd64 profile list -o json
functional_test.go:1362: Took "302.52371ms" to run "out/minikube-linux-amd64 profile list -o json"
functional_test.go:1370: (dbg) Run:  out/minikube-linux-amd64 profile list -o json --light
functional_test.go:1375: Took "72.969213ms" to run "out/minikube-linux-amd64 profile list -o json --light"
--- PASS: TestFunctional/parallel/ProfileCmd/profile_json_output (0.38s)

                                                
                                    
TestFunctional/parallel/MountCmd/any-port (6.67s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/any-port
functional_test_mount_test.go:73: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdany-port3774248900/001:/mount-9p --alsologtostderr -v=1]
functional_test_mount_test.go:107: wrote "test-1710440237119366930" to /tmp/TestFunctionalparallelMountCmdany-port3774248900/001/created-by-test
functional_test_mount_test.go:107: wrote "test-1710440237119366930" to /tmp/TestFunctionalparallelMountCmdany-port3774248900/001/created-by-test-removed-by-pod
functional_test_mount_test.go:107: wrote "test-1710440237119366930" to /tmp/TestFunctionalparallelMountCmdany-port3774248900/001/test-1710440237119366930
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:115: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (248.360366ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
E0314 18:17:17.416941  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test_mount_test.go:115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:129: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh -- ls -la /mount-9p
functional_test_mount_test.go:133: guest mount directory contents
total 2
-rw-r--r-- 1 docker docker 24 Mar 14 18:17 created-by-test
-rw-r--r-- 1 docker docker 24 Mar 14 18:17 created-by-test-removed-by-pod
-rw-r--r-- 1 docker docker 24 Mar 14 18:17 test-1710440237119366930
functional_test_mount_test.go:137: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh cat /mount-9p/test-1710440237119366930
functional_test_mount_test.go:148: (dbg) Run:  kubectl --context functional-059245 replace --force -f testdata/busybox-mount-test.yaml
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: waiting 4m0s for pods matching "integration-test=busybox-mount" in namespace "default" ...
helpers_test.go:344: "busybox-mount" [542990d3-6a36-402c-93e5-614d180bd4a3] Pending
helpers_test.go:344: "busybox-mount" [542990d3-6a36-402c-93e5-614d180bd4a3] Pending / Ready:ContainersNotReady (containers with unready status: [mount-munger]) / ContainersReady:ContainersNotReady (containers with unready status: [mount-munger])
E0314 18:17:19.977285  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:344: "busybox-mount" [542990d3-6a36-402c-93e5-614d180bd4a3] Pending: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
helpers_test.go:344: "busybox-mount" [542990d3-6a36-402c-93e5-614d180bd4a3] Succeeded: Initialized:PodCompleted / Ready:PodCompleted / ContainersReady:PodCompleted
functional_test_mount_test.go:153: (dbg) TestFunctional/parallel/MountCmd/any-port: integration-test=busybox-mount healthy within 4.003787319s
functional_test_mount_test.go:169: (dbg) Run:  kubectl --context functional-059245 logs busybox-mount
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh stat /mount-9p/created-by-test
functional_test_mount_test.go:181: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh stat /mount-9p/created-by-pod
functional_test_mount_test.go:90: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:94: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdany-port3774248900/001:/mount-9p --alsologtostderr -v=1] ...
--- PASS: TestFunctional/parallel/MountCmd/any-port (6.67s)
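The first findmnt probe fails because the 9p mount had not finished coming up; the test simply retries until it appears. A sketch of that verification loop, reusing the binary path, profile and mount point from this run:

package main

import (
	"fmt"
	"os/exec"
	"time"
)

func main() {
	// Poll until the 9p filesystem is visible at /mount-9p inside the guest.
	for i := 0; i < 10; i++ {
		if exec.Command("out/minikube-linux-amd64", "-p", "functional-059245",
			"ssh", "findmnt -T /mount-9p | grep 9p").Run() == nil {
			fmt.Println("9p mount is up at /mount-9p")
			return
		}
		time.Sleep(time.Second)
	}
	fmt.Println("mount never appeared")
}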

                                                
                                    
TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/specific-port
functional_test_mount_test.go:213: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdspecific-port2701189907/001:/mount-9p --alsologtostderr -v=1 --port 46464]
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:243: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p": exit status 1 (243.810989ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:243: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T /mount-9p | grep 9p"
functional_test_mount_test.go:257: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh -- ls -la /mount-9p
E0314 18:17:25.098330  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
functional_test_mount_test.go:261: guest mount directory contents
total 0
functional_test_mount_test.go:263: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdspecific-port2701189907/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
functional_test_mount_test.go:264: reading mount text
functional_test_mount_test.go:278: done reading mount text
functional_test_mount_test.go:230: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "sudo umount -f /mount-9p"
functional_test_mount_test.go:230: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "sudo umount -f /mount-9p": exit status 1 (293.017624ms)

                                                
                                                
-- stdout --
	umount: /mount-9p: not mounted.

                                                
                                                
-- /stdout --
** stderr ** 
	ssh: Process exited with status 32

                                                
                                                
** /stderr **
functional_test_mount_test.go:232: "out/minikube-linux-amd64 -p functional-059245 ssh \"sudo umount -f /mount-9p\"": exit status 1
functional_test_mount_test.go:234: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdspecific-port2701189907/001:/mount-9p --alsologtostderr -v=1 --port 46464] ...
--- PASS: TestFunctional/parallel/MountCmd/specific-port (2.03s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/List
functional_test.go:1455: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service list
--- PASS: TestFunctional/parallel/ServiceCmd/List (0.36s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/JSONOutput (0.5s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/JSONOutput
functional_test.go:1485: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service list -o json
functional_test.go:1490: Took "500.939923ms" to run "out/minikube-linux-amd64 -p functional-059245 service list -o json"
--- PASS: TestFunctional/parallel/ServiceCmd/JSONOutput (0.50s)

                                                
                                    
TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                                
=== RUN   TestFunctional/parallel/MountCmd/VerifyCleanup
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount1 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount2 --alsologtostderr -v=1]
functional_test_mount_test.go:298: (dbg) daemon: [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount3 --alsologtostderr -v=1]
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T" /mount1: exit status 1 (280.133298ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 1

                                                
                                                
** /stderr **
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T" /mount1
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T" /mount2
functional_test_mount_test.go:325: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh "findmnt -T" /mount3
functional_test_mount_test.go:370: (dbg) Run:  out/minikube-linux-amd64 mount -p functional-059245 --kill=true
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount1 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount2 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
functional_test_mount_test.go:313: (dbg) stopping [out/minikube-linux-amd64 mount -p functional-059245 /tmp/TestFunctionalparallelMountCmdVerifyCleanup3603304916/001:/mount3 --alsologtostderr -v=1] ...
helpers_test.go:490: unable to find parent, assuming dead: process does not exist
--- PASS: TestFunctional/parallel/MountCmd/VerifyCleanup (1.57s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/HTTPS
functional_test.go:1505: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service --namespace=default --https --url hello-node
functional_test.go:1518: found endpoint: https://192.168.39.220:30386
--- PASS: TestFunctional/parallel/ServiceCmd/HTTPS (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/Format
functional_test.go:1536: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service hello-node --url --format={{.IP}}
--- PASS: TestFunctional/parallel/ServiceCmd/Format (0.44s)

                                                
                                    
TestFunctional/parallel/ServiceCmd/URL (0.36s)

                                                
                                                
=== RUN   TestFunctional/parallel/ServiceCmd/URL
functional_test.go:1555: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 service hello-node --url
functional_test.go:1561: found endpoint for hello-node: http://192.168.39.220:30386
--- PASS: TestFunctional/parallel/ServiceCmd/URL (0.36s)
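Once the NodePort URL is known, the echoserver can be exercised directly. A minimal check against the endpoint found above (the IP and port are specific to this run and will differ elsewhere):

package main

import (
	"fmt"
	"io"
	"net/http"
)

func main() {
	// Endpoint reported by "service hello-node --url" in the log above; it only
	// resolves while that cluster is running.
	resp, err := http.Get("http://192.168.39.220:30386")
	if err != nil {
		fmt.Println("request failed:", err)
		return
	}
	defer resp.Body.Close()
	body, _ := io.ReadAll(resp.Body)
	fmt.Println(resp.Status)
	fmt.Println(string(body))
}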

                                                
                                    
TestFunctional/parallel/Version/short (0.06s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/short
=== PAUSE TestFunctional/parallel/Version/short

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/short
functional_test.go:2252: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 version --short
--- PASS: TestFunctional/parallel/Version/short (0.06s)

                                                
                                    
TestFunctional/parallel/Version/components (0.74s)

                                                
                                                
=== RUN   TestFunctional/parallel/Version/components
=== PAUSE TestFunctional/parallel/Version/components

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/Version/components
functional_test.go:2266: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 version -o=json --components
--- PASS: TestFunctional/parallel/Version/components (0.74s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListShort
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListShort

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListShort
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls --format short --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059245 image ls --format short --alsologtostderr:
registry.k8s.io/pause:latest
registry.k8s.io/pause:3.9
registry.k8s.io/pause:3.3
registry.k8s.io/pause:3.1
registry.k8s.io/kube-scheduler:v1.28.4
registry.k8s.io/kube-proxy:v1.28.4
registry.k8s.io/kube-controller-manager:v1.28.4
registry.k8s.io/kube-apiserver:v1.28.4
registry.k8s.io/etcd:3.5.9-0
registry.k8s.io/echoserver:1.8
registry.k8s.io/coredns/coredns:v1.10.1
localhost/minikube-local-cache-test:functional-059245
gcr.io/k8s-minikube/storage-provisioner:v5
gcr.io/k8s-minikube/busybox:1.28.4-glibc
gcr.io/google-containers/addon-resizer:functional-059245
docker.io/library/nginx:latest
docker.io/library/mysql:5.7
docker.io/kindest/kindnetd:v20230809-80a64d96
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059245 image ls --format short --alsologtostderr:
I0314 18:17:58.926414  960430 out.go:291] Setting OutFile to fd 1 ...
I0314 18:17:58.926772  960430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.926808  960430 out.go:304] Setting ErrFile to fd 2...
I0314 18:17:58.926823  960430 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.927149  960430 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
I0314 18:17:58.927953  960430 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.928078  960430 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.928489  960430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.928556  960430 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.945094  960430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41159
I0314 18:17:58.945958  960430 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.946625  960430 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.946644  960430 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.947071  960430 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.947309  960430 main.go:141] libmachine: (functional-059245) Calling .GetState
I0314 18:17:58.949476  960430 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.949524  960430 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.963470  960430 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:36789
I0314 18:17:58.964151  960430 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.964704  960430 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.964720  960430 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.965283  960430 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.965567  960430 main.go:141] libmachine: (functional-059245) Calling .DriverName
I0314 18:17:58.965770  960430 ssh_runner.go:195] Run: systemctl --version
I0314 18:17:58.965791  960430 main.go:141] libmachine: (functional-059245) Calling .GetSSHHostname
I0314 18:17:58.970210  960430 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.970682  960430 main.go:141] libmachine: (functional-059245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:9c:6c", ip: ""} in network mk-functional-059245: {Iface:virbr1 ExpiryTime:2024-03-14 19:14:25 +0000 UTC Type:0 Mac:52:54:00:20:9c:6c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-059245 Clientid:01:52:54:00:20:9c:6c}
I0314 18:17:58.970707  960430 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined IP address 192.168.39.220 and MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.970963  960430 main.go:141] libmachine: (functional-059245) Calling .GetSSHPort
I0314 18:17:58.971116  960430 main.go:141] libmachine: (functional-059245) Calling .GetSSHKeyPath
I0314 18:17:58.971264  960430 main.go:141] libmachine: (functional-059245) Calling .GetSSHUsername
I0314 18:17:58.971360  960430 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/functional-059245/id_rsa Username:docker}
I0314 18:17:59.092171  960430 ssh_runner.go:195] Run: sudo crictl images --output json
I0314 18:17:59.189804  960430 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.189823  960430 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.190597  960430 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
I0314 18:17:59.190645  960430 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.190663  960430 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.190673  960430 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.190684  960430 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.191967  960430 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.191983  960430 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.192021  960430 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListShort (0.35s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListTable
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListTable

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListTable
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls --format table --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059245 image ls --format table --alsologtostderr:
|-----------------------------------------|--------------------|---------------|--------|
|                  Image                  |        Tag         |   Image ID    |  Size  |
|-----------------------------------------|--------------------|---------------|--------|
| docker.io/library/nginx                 | latest             | 92b11f67642b6 | 191MB  |
| gcr.io/google-containers/addon-resizer  | functional-059245  | ffd4cfbbe753e | 34.1MB |
| registry.k8s.io/etcd                    | 3.5.9-0            | 73deb9a3f7025 | 295MB  |
| registry.k8s.io/kube-scheduler          | v1.28.4            | e3db313c6dbc0 | 61.6MB |
| registry.k8s.io/pause                   | latest             | 350b164e7ae1d | 247kB  |
| docker.io/library/mysql                 | 5.7                | 5107333e08a87 | 520MB  |
| registry.k8s.io/coredns/coredns         | v1.10.1            | ead0a4a53df89 | 53.6MB |
| registry.k8s.io/echoserver              | 1.8                | 82e4c8a736a4f | 97.8MB |
| registry.k8s.io/kube-controller-manager | v1.28.4            | d058aa5ab969c | 123MB  |
| registry.k8s.io/kube-proxy              | v1.28.4            | 83f6cc407eed8 | 74.7MB |
| docker.io/kindest/kindnetd              | v20230809-80a64d96 | c7d1297425461 | 65.3MB |
| gcr.io/k8s-minikube/storage-provisioner | v5                 | 6e38f40d628db | 31.5MB |
| registry.k8s.io/pause                   | 3.9                | e6f1816883972 | 750kB  |
| localhost/minikube-local-cache-test     | functional-059245  | 3a4400069a619 | 3.35kB |
| registry.k8s.io/pause                   | 3.1                | da86e6ba6ca19 | 747kB  |
| registry.k8s.io/pause                   | 3.3                | 0184c1613d929 | 686kB  |
| gcr.io/k8s-minikube/busybox             | 1.28.4-glibc       | 56cc512116c8f | 4.63MB |
| registry.k8s.io/kube-apiserver          | v1.28.4            | 7fe0e6f37db33 | 127MB  |
|-----------------------------------------|--------------------|---------------|--------|
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059245 image ls --format table --alsologtostderr:
I0314 18:17:59.173229  960511 out.go:291] Setting OutFile to fd 1 ...
I0314 18:17:59.173543  960511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:59.173556  960511 out.go:304] Setting ErrFile to fd 2...
I0314 18:17:59.173562  960511 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:59.173872  960511 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
I0314 18:17:59.174701  960511 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:59.174863  960511 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:59.175460  960511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:59.175509  960511 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:59.198381  960511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:37249
I0314 18:17:59.198847  960511 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:59.199511  960511 main.go:141] libmachine: Using API Version  1
I0314 18:17:59.199545  960511 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:59.199890  960511 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:59.200122  960511 main.go:141] libmachine: (functional-059245) Calling .GetState
I0314 18:17:59.202515  960511 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:59.202564  960511 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:59.219015  960511 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46259
I0314 18:17:59.219436  960511 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:59.219939  960511 main.go:141] libmachine: Using API Version  1
I0314 18:17:59.219963  960511 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:59.220368  960511 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:59.220588  960511 main.go:141] libmachine: (functional-059245) Calling .DriverName
I0314 18:17:59.220865  960511 ssh_runner.go:195] Run: systemctl --version
I0314 18:17:59.220899  960511 main.go:141] libmachine: (functional-059245) Calling .GetSSHHostname
I0314 18:17:59.223585  960511 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:59.224008  960511 main.go:141] libmachine: (functional-059245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:9c:6c", ip: ""} in network mk-functional-059245: {Iface:virbr1 ExpiryTime:2024-03-14 19:14:25 +0000 UTC Type:0 Mac:52:54:00:20:9c:6c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-059245 Clientid:01:52:54:00:20:9c:6c}
I0314 18:17:59.224037  960511 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined IP address 192.168.39.220 and MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:59.224259  960511 main.go:141] libmachine: (functional-059245) Calling .GetSSHPort
I0314 18:17:59.224448  960511 main.go:141] libmachine: (functional-059245) Calling .GetSSHKeyPath
I0314 18:17:59.224581  960511 main.go:141] libmachine: (functional-059245) Calling .GetSSHUsername
I0314 18:17:59.224695  960511 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/functional-059245/id_rsa Username:docker}
I0314 18:17:59.311610  960511 ssh_runner.go:195] Run: sudo crictl images --output json
I0314 18:17:59.354874  960511 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.354894  960511 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.355181  960511 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.355201  960511 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.355205  960511 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
I0314 18:17:59.355211  960511 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.355302  960511 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.355533  960511 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.355546  960511 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.355571  960511 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
--- PASS: TestFunctional/parallel/ImageCommands/ImageListTable (0.26s)

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListJson
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListJson

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListJson
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls --format json --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059245 image ls --format json --alsologtostderr:
[{"id":"07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558","repoDigests":["docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93","docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029"],"repoTags":[],"size":"249229937"},{"id":"73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9","repoDigests":["registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15","registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3"],"repoTags":["registry.k8s.io/etcd:3.5.9-0"],"size":"295456551"},{"id":"7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257","repoDigests":["registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499","registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb"],"repoTags":["registry.k8s.io/kube-ap
iserver:v1.28.4"],"size":"127226832"},{"id":"6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562","repoDigests":["gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944","gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f"],"repoTags":["gcr.io/k8s-minikube/storage-provisioner:v5"],"size":"31470524"},{"id":"da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e","repoDigests":["registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e"],"repoTags":["registry.k8s.io/pause:3.1"],"size":"746911"},{"id":"0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da","repoDigests":["registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04"],"repoTags":["registry.k8s.io/pause:3.3"],"size":"686139"},{"id":"d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591","repoDigests":["registry.k8s.io/k
ube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c","registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232"],"repoTags":["registry.k8s.io/kube-controller-manager:v1.28.4"],"size":"123261750"},{"id":"350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06","repoDigests":["registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9"],"repoTags":["registry.k8s.io/pause:latest"],"size":"247077"},{"id":"5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933","repoDigests":["docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb","docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da"],"repoTags":["docker.io/library/mysql:5.7"],"size":"519571821"},{"id":"92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e","repoDigests":["docker.io/library/nginx@sha256:52478f8cd6a142fd
462f0a7614a7bb064e969a4c083648235d6943c786df8cc7","docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e"],"repoTags":["docker.io/library/nginx:latest"],"size":"190865876"},{"id":"ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc","repoDigests":["registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e","registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378"],"repoTags":["registry.k8s.io/coredns/coredns:v1.10.1"],"size":"53621675"},{"id":"82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410","repoDigests":["registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969"],"repoTags":["registry.k8s.io/echoserver:1.8"],"size":"97846543"},{"id":"3a4400069a619468db1fafc44adb4da0af735e2dd3b6059db66bf1cf94a75f1e","repoDigests":["localhost/minikube-local-cache-test@sha256:906464e740e5765ca55e028df6c8c806dc074791490e973
b4eb71691c669b2bc"],"repoTags":["localhost/minikube-local-cache-test:functional-059245"],"size":"3345"},{"id":"83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e","repoDigests":["registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e","registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532"],"repoTags":["registry.k8s.io/kube-proxy:v1.28.4"],"size":"74749335"},{"id":"e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1","repoDigests":["registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba","registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32"],"repoTags":["registry.k8s.io/kube-scheduler:v1.28.4"],"size":"61551410"},{"id":"e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c","repoDigests":["registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097","reg
istry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10"],"repoTags":["registry.k8s.io/pause:3.9"],"size":"750414"},{"id":"c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc","repoDigests":["docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052","docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4"],"repoTags":["docker.io/kindest/kindnetd:v20230809-80a64d96"],"size":"65258016"},{"id":"115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7","repoDigests":["docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a","docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c"],"repoTags":[],"size":"43824855"},{"id":"ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91","repoDigests":["gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069ad
f654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126"],"repoTags":["gcr.io/google-containers/addon-resizer:functional-059245"],"size":"34114467"},{"id":"56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c","repoDigests":["gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e","gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998"],"repoTags":["gcr.io/k8s-minikube/busybox:1.28.4-glibc"],"size":"4631262"}]
functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059245 image ls --format json --alsologtostderr:
I0314 18:17:58.922248  960431 out.go:291] Setting OutFile to fd 1 ...
I0314 18:17:58.922512  960431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.922522  960431 out.go:304] Setting ErrFile to fd 2...
I0314 18:17:58.922527  960431 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.922739  960431 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
I0314 18:17:58.923293  960431 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.923397  960431 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.923796  960431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.923855  960431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.939510  960431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:33387
I0314 18:17:58.939969  960431 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.940708  960431 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.940743  960431 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.941132  960431 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.941370  960431 main.go:141] libmachine: (functional-059245) Calling .GetState
I0314 18:17:58.943324  960431 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.943366  960431 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.960851  960431 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40261
I0314 18:17:58.961273  960431 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.961917  960431 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.961945  960431 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.962339  960431 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.962533  960431 main.go:141] libmachine: (functional-059245) Calling .DriverName
I0314 18:17:58.962739  960431 ssh_runner.go:195] Run: systemctl --version
I0314 18:17:58.962769  960431 main.go:141] libmachine: (functional-059245) Calling .GetSSHHostname
I0314 18:17:58.966067  960431 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.966708  960431 main.go:141] libmachine: (functional-059245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:9c:6c", ip: ""} in network mk-functional-059245: {Iface:virbr1 ExpiryTime:2024-03-14 19:14:25 +0000 UTC Type:0 Mac:52:54:00:20:9c:6c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-059245 Clientid:01:52:54:00:20:9c:6c}
I0314 18:17:58.966731  960431 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined IP address 192.168.39.220 and MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.966845  960431 main.go:141] libmachine: (functional-059245) Calling .GetSSHPort
I0314 18:17:58.967040  960431 main.go:141] libmachine: (functional-059245) Calling .GetSSHKeyPath
I0314 18:17:58.967197  960431 main.go:141] libmachine: (functional-059245) Calling .GetSSHUsername
I0314 18:17:58.967335  960431 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/functional-059245/id_rsa Username:docker}
I0314 18:17:59.058359  960431 ssh_runner.go:195] Run: sudo crictl images --output json
I0314 18:17:59.156773  960431 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.156791  960431 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.157138  960431 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.157154  960431 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.157164  960431 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.157172  960431 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.157410  960431 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.157423  960431 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListJson (0.31s)
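The JSON emitted by image ls --format json is an array of image records. A minimal sketch that decodes it from the command's stdout, with field names taken from the output above and the binary path and profile from this run:

package main

import (
	"encoding/json"
	"fmt"
	"os/exec"
)

// image mirrors the fields visible in the JSON printed above.
type image struct {
	ID          string   `json:"id"`
	RepoDigests []string `json:"repoDigests"`
	RepoTags    []string `json:"repoTags"`
	Size        string   `json:"size"`
}

func main() {
	out, err := exec.Command("out/minikube-linux-amd64", "-p", "functional-059245",
		"image", "ls", "--format", "json").Output()
	if err != nil {
		fmt.Println("image ls failed:", err)
		return
	}
	var images []image
	if err := json.Unmarshal(out, &images); err != nil {
		fmt.Println("decode failed:", err)
		return
	}
	for _, img := range images {
		name := img.ID
		if len(img.RepoTags) > 0 {
			name = img.RepoTags[0]
		}
		fmt.Printf("%-60s %s bytes\n", name, img.Size)
	}
}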

                                                
                                    
TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

                                                
                                                
=== RUN   TestFunctional/parallel/ImageCommands/ImageListYaml
=== PAUSE TestFunctional/parallel/ImageCommands/ImageListYaml

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/ImageCommands/ImageListYaml
functional_test.go:260: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls --format yaml --alsologtostderr
functional_test.go:265: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059245 image ls --format yaml --alsologtostderr:
- id: 92b11f67642b62bbb98e7e49169c346b30e20cd3c1c034d31087e46924b9312e
repoDigests:
- docker.io/library/nginx@sha256:52478f8cd6a142fd462f0a7614a7bb064e969a4c083648235d6943c786df8cc7
- docker.io/library/nginx@sha256:6db391d1c0cfb30588ba0bf72ea999404f2764febf0f1f196acd5867ac7efa7e
repoTags:
- docker.io/library/nginx:latest
size: "190865876"
- id: 56cc512116c8f894f11ce1995460aef1ee0972d48bc2a3bdb1faaac7c020289c
repoDigests:
- gcr.io/k8s-minikube/busybox@sha256:2d03e6ceeb99250061dd110530b0ece7998cd84121f952adef120ea7c5a6f00e
- gcr.io/k8s-minikube/busybox@sha256:a85c92d5aa82aa6db0f92e5af591c2670a60a762da6bdfec52d960d55295f998
repoTags:
- gcr.io/k8s-minikube/busybox:1.28.4-glibc
size: "4631262"
- id: 3a4400069a619468db1fafc44adb4da0af735e2dd3b6059db66bf1cf94a75f1e
repoDigests:
- localhost/minikube-local-cache-test@sha256:906464e740e5765ca55e028df6c8c806dc074791490e973b4eb71691c669b2bc
repoTags:
- localhost/minikube-local-cache-test:functional-059245
size: "3345"
- id: 7fe0e6f37db33464725e616a12ccc4e36970370005a2b09683a974db6350c257
repoDigests:
- registry.k8s.io/kube-apiserver@sha256:3993d654a91d922a7ea098b2f4b3ff2853c200e3387c66c8a1e84f7222c85499
- registry.k8s.io/kube-apiserver@sha256:5b28a364467cf7e134343bb3ee2c6d40682b473a743a72142c7bbe25767d36eb
repoTags:
- registry.k8s.io/kube-apiserver:v1.28.4
size: "127226832"
- id: e3db313c6dbc065d4ac3b32c7a6f2a878949031b881d217b63881a109c5cfba1
repoDigests:
- registry.k8s.io/kube-scheduler@sha256:335bba9e861b88fa8b7bb9250bcd69b7a33f83da4fee93f9fc0eedc6f34e28ba
- registry.k8s.io/kube-scheduler@sha256:d994c8a78e8cb1ec189fabfd258ff002cccdeb63678fad08ec0fba32298ffe32
repoTags:
- registry.k8s.io/kube-scheduler:v1.28.4
size: "61551410"
- id: 350b164e7ae1dcddeffadd65c76226c9b6dc5553f5179153fb0e36b78f2a5e06
repoDigests:
- registry.k8s.io/pause@sha256:5bcb06ed43da4a16c6e6e33898eb0506e940bd66822659ecf0a898bbb0da7cb9
repoTags:
- registry.k8s.io/pause:latest
size: "247077"
- id: 115053965e86b2df4d78af78d7951b8644839d20a03820c6df59a261103315f7
repoDigests:
- docker.io/kubernetesui/metrics-scraper@sha256:43227e8286fd379ee0415a5e2156a9439c4056807e3caa38e1dd413b0644807a
- docker.io/kubernetesui/metrics-scraper@sha256:76049887f07a0476dc93efc2d3569b9529bf982b22d29f356092ce206e98765c
repoTags: []
size: "43824855"
- id: ead0a4a53df89fd173874b46093b6e62d8c72967bbf606d672c9e8c9b601a4fc
repoDigests:
- registry.k8s.io/coredns/coredns@sha256:a0ead06651cf580044aeb0a0feba63591858fb2e43ade8c9dea45a6a89ae7e5e
- registry.k8s.io/coredns/coredns@sha256:be7652ce0b43b1339f3d14d9b14af9f588578011092c1f7893bd55432d83a378
repoTags:
- registry.k8s.io/coredns/coredns:v1.10.1
size: "53621675"
- id: 73deb9a3f702532592a4167455f8bf2e5f5d900bcc959ba2fd2d35c321de1af9
repoDigests:
- registry.k8s.io/etcd@sha256:6c54bcd6cf6de7760c17ddfb31dd76f5ac64c5d8609d66829b542eb0b6b7ab15
- registry.k8s.io/etcd@sha256:e013d0d5e4e25d00c61a7ff839927a1f36479678f11e49502b53a5e0b14f10c3
repoTags:
- registry.k8s.io/etcd:3.5.9-0
size: "295456551"
- id: 83f6cc407eed88d214aad97f3539bde5c8e485ff14424cd021a3a2899304398e
repoDigests:
- registry.k8s.io/kube-proxy@sha256:b68e9ff5bed1103e0659277256d805ab9313c8b7856ee45d0d3eea0227760f7e
- registry.k8s.io/kube-proxy@sha256:e63408a0f5068a7e9d4b34fd72b4a2b0e5512509b53cd2123a37fc991b0ef532
repoTags:
- registry.k8s.io/kube-proxy:v1.28.4
size: "74749335"
- id: da86e6ba6ca197bf6bc5e9d900febd906b133eaa4750e6bed647b0fbe50ed43e
repoDigests:
- registry.k8s.io/pause@sha256:84805ddcaaae94434d8eacb7e843f549ec1da0cd277787b97ad9d9ac2cea929e
repoTags:
- registry.k8s.io/pause:3.1
size: "746911"
- id: ffd4cfbbe753e62419e129ee2ac618beb94e51baa7471df5038b0b516b59cf91
repoDigests:
- gcr.io/google-containers/addon-resizer@sha256:0ce7cf4876524f069adf654e4dd3c95fe4bfc889c8bbc03cd6ecd061d9392126
repoTags:
- gcr.io/google-containers/addon-resizer:functional-059245
size: "34114467"
- id: 5107333e08a87b836d48ff7528b1e84b9c86781cc9f1748bbc1b8c42a870d933
repoDigests:
- docker.io/library/mysql@sha256:4bc6bc963e6d8443453676cae56536f4b8156d78bae03c0145cbe47c2aad73bb
- docker.io/library/mysql@sha256:dab0a802b44617303694fb17d166501de279c3031ddeb28c56ecf7fcab5ef0da
repoTags:
- docker.io/library/mysql:5.7
size: "519571821"
- id: 6e38f40d628db3002f5617342c8872c935de530d867d0f709a2fbda1a302a562
repoDigests:
- gcr.io/k8s-minikube/storage-provisioner@sha256:18eb69d1418e854ad5a19e399310e52808a8321e4c441c1dddad8977a0d7a944
- gcr.io/k8s-minikube/storage-provisioner@sha256:c4c05d6ad6c0f24d87b39e596d4dddf64bec3e0d84f5b36e4511d4ebf583f38f
repoTags:
- gcr.io/k8s-minikube/storage-provisioner:v5
size: "31470524"
- id: 82e4c8a736a4fcf22b5ef9f6a4ff6207064c7187d7694bf97bd561605a538410
repoDigests:
- registry.k8s.io/echoserver@sha256:cb3386f863f6a4b05f33c191361723f9d5927ac287463b1bea633bf859475969
repoTags:
- registry.k8s.io/echoserver:1.8
size: "97846543"
- id: e6f1816883972d4be47bd48879a08919b96afcd344132622e4d444987919323c
repoDigests:
- registry.k8s.io/pause@sha256:7031c1b283388d2c2e09b57badb803c05ebed362dc88d84b480cc47f72a21097
- registry.k8s.io/pause@sha256:8d4106c88ec0bd28001e34c975d65175d994072d65341f62a8ab0754b0fafe10
repoTags:
- registry.k8s.io/pause:3.9
size: "750414"
- id: c7d1297425461d3e24fe0ba658818593be65d13a2dd45a4c02d8768d6c8c18cc
repoDigests:
- docker.io/kindest/kindnetd@sha256:4a58d1cd2b45bf2460762a51a4aa9c80861f460af35800c05baab0573f923052
- docker.io/kindest/kindnetd@sha256:a315b9c49a50d5e126e1b5fa5ef0eae2a9b367c9c4f868e897d772b142372bb4
repoTags:
- docker.io/kindest/kindnetd:v20230809-80a64d96
size: "65258016"
- id: d058aa5ab969ce7b84d25e7188be1f80633b18db8ea7d02d9d0a78e676236591
repoDigests:
- registry.k8s.io/kube-controller-manager@sha256:65486c8c338f96dc022dd1a0abe8763e38f35095b84b208c78f44d9e99447d1c
- registry.k8s.io/kube-controller-manager@sha256:c173b92b1ac1ac50de36a9d8d3af6377cbb7bbd930f42d4332cbaea521c57232
repoTags:
- registry.k8s.io/kube-controller-manager:v1.28.4
size: "123261750"
- id: 0184c1613d92931126feb4c548e5da11015513b9e4c104e7305ee8b53b50a9da
repoDigests:
- registry.k8s.io/pause@sha256:1000de19145c53d83aab989956fa8fca08dcbcc5b0208bdc193517905e6ccd04
repoTags:
- registry.k8s.io/pause:3.3
size: "686139"
- id: 07655ddf2eebe5d250f7a72c25f638b27126805d61779741b4e62e69ba080558
repoDigests:
- docker.io/kubernetesui/dashboard@sha256:2e500d29e9d5f4a086b908eb8dfe7ecac57d2ab09d65b24f588b1d449841ef93
- docker.io/kubernetesui/dashboard@sha256:ca93706ef4e400542202d620b8094a7e4e568ca9b1869c71b053cdf8b5dc3029
repoTags: []
size: "249229937"

functional_test.go:268: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059245 image ls --format yaml --alsologtostderr:
I0314 18:17:58.927603  960432 out.go:291] Setting OutFile to fd 1 ...
I0314 18:17:58.927762  960432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.927770  960432 out.go:304] Setting ErrFile to fd 2...
I0314 18:17:58.927778  960432 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:58.928171  960432 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
I0314 18:17:58.929128  960432 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.929246  960432 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:58.929765  960432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.929806  960432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.945136  960432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:34093
I0314 18:17:58.945580  960432 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.946168  960432 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.946190  960432 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.946748  960432 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.946955  960432 main.go:141] libmachine: (functional-059245) Calling .GetState
I0314 18:17:58.948878  960432 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:58.948912  960432 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:58.965393  960432 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41539
I0314 18:17:58.965748  960432 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:58.966229  960432 main.go:141] libmachine: Using API Version  1
I0314 18:17:58.966253  960432 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:58.966668  960432 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:58.966887  960432 main.go:141] libmachine: (functional-059245) Calling .DriverName
I0314 18:17:58.967150  960432 ssh_runner.go:195] Run: systemctl --version
I0314 18:17:58.967186  960432 main.go:141] libmachine: (functional-059245) Calling .GetSSHHostname
I0314 18:17:58.970122  960432 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.970719  960432 main.go:141] libmachine: (functional-059245) Calling .GetSSHPort
I0314 18:17:58.970799  960432 main.go:141] libmachine: (functional-059245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:9c:6c", ip: ""} in network mk-functional-059245: {Iface:virbr1 ExpiryTime:2024-03-14 19:14:25 +0000 UTC Type:0 Mac:52:54:00:20:9c:6c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-059245 Clientid:01:52:54:00:20:9c:6c}
I0314 18:17:58.970824  960432 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined IP address 192.168.39.220 and MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:58.970889  960432 main.go:141] libmachine: (functional-059245) Calling .GetSSHKeyPath
I0314 18:17:58.971029  960432 main.go:141] libmachine: (functional-059245) Calling .GetSSHUsername
I0314 18:17:58.971152  960432 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/functional-059245/id_rsa Username:docker}
I0314 18:17:59.088416  960432 ssh_runner.go:195] Run: sudo crictl images --output json
I0314 18:17:59.192834  960432 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.192848  960432 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.193145  960432 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.193164  960432 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:17:59.193161  960432 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
I0314 18:17:59.193182  960432 main.go:141] libmachine: Making call to close driver server
I0314 18:17:59.193192  960432 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:17:59.193468  960432 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:17:59.193486  960432 main.go:141] libmachine: Making call to close connection to plugin binary
--- PASS: TestFunctional/parallel/ImageCommands/ImageListYaml (0.35s)

TestFunctional/parallel/ImageCommands/ImageBuild (2.32s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageBuild
=== PAUSE TestFunctional/parallel/ImageCommands/ImageBuild

=== CONT  TestFunctional/parallel/ImageCommands/ImageBuild
functional_test.go:307: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 ssh pgrep buildkitd
functional_test.go:307: (dbg) Non-zero exit: out/minikube-linux-amd64 -p functional-059245 ssh pgrep buildkitd: exit status 1 (210.18854ms)

** stderr ** 
	ssh: Process exited with status 1

** /stderr **
functional_test.go:314: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image build -t localhost/my-image:functional-059245 testdata/build --alsologtostderr
functional_test.go:314: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image build -t localhost/my-image:functional-059245 testdata/build --alsologtostderr: (1.850915294s)
functional_test.go:319: (dbg) Stdout: out/minikube-linux-amd64 -p functional-059245 image build -t localhost/my-image:functional-059245 testdata/build --alsologtostderr:
STEP 1/3: FROM gcr.io/k8s-minikube/busybox
STEP 2/3: RUN true
--> fcafc712ace
STEP 3/3: ADD content.txt /
COMMIT localhost/my-image:functional-059245
--> 0e8d50f6363
Successfully tagged localhost/my-image:functional-059245
0e8d50f6363a8f10128982532d3741e76058c9d5365df9b03af92acab6f3c94b
functional_test.go:322: (dbg) Stderr: out/minikube-linux-amd64 -p functional-059245 image build -t localhost/my-image:functional-059245 testdata/build --alsologtostderr:
I0314 18:17:59.430181  960563 out.go:291] Setting OutFile to fd 1 ...
I0314 18:17:59.430328  960563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:59.430340  960563 out.go:304] Setting ErrFile to fd 2...
I0314 18:17:59.430347  960563 out.go:338] TERM=,COLORTERM=, which probably does not support color
I0314 18:17:59.431005  960563 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
I0314 18:17:59.432283  960563 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:59.433011  960563 config.go:182] Loaded profile config "functional-059245": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
I0314 18:17:59.433376  960563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:59.433433  960563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:59.448369  960563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:38093
I0314 18:17:59.448813  960563 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:59.449321  960563 main.go:141] libmachine: Using API Version  1
I0314 18:17:59.449343  960563 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:59.449674  960563 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:59.449845  960563 main.go:141] libmachine: (functional-059245) Calling .GetState
I0314 18:17:59.451826  960563 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
I0314 18:17:59.451862  960563 main.go:141] libmachine: Launching plugin server for driver kvm2
I0314 18:17:59.466093  960563 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:32961
I0314 18:17:59.466469  960563 main.go:141] libmachine: () Calling .GetVersion
I0314 18:17:59.466887  960563 main.go:141] libmachine: Using API Version  1
I0314 18:17:59.466911  960563 main.go:141] libmachine: () Calling .SetConfigRaw
I0314 18:17:59.467214  960563 main.go:141] libmachine: () Calling .GetMachineName
I0314 18:17:59.467399  960563 main.go:141] libmachine: (functional-059245) Calling .DriverName
I0314 18:17:59.467602  960563 ssh_runner.go:195] Run: systemctl --version
I0314 18:17:59.467625  960563 main.go:141] libmachine: (functional-059245) Calling .GetSSHHostname
I0314 18:17:59.470202  960563 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:59.470549  960563 main.go:141] libmachine: (functional-059245) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:20:9c:6c", ip: ""} in network mk-functional-059245: {Iface:virbr1 ExpiryTime:2024-03-14 19:14:25 +0000 UTC Type:0 Mac:52:54:00:20:9c:6c Iaid: IPaddr:192.168.39.220 Prefix:24 Hostname:functional-059245 Clientid:01:52:54:00:20:9c:6c}
I0314 18:17:59.470587  960563 main.go:141] libmachine: (functional-059245) DBG | domain functional-059245 has defined IP address 192.168.39.220 and MAC address 52:54:00:20:9c:6c in network mk-functional-059245
I0314 18:17:59.470736  960563 main.go:141] libmachine: (functional-059245) Calling .GetSSHPort
I0314 18:17:59.470891  960563 main.go:141] libmachine: (functional-059245) Calling .GetSSHKeyPath
I0314 18:17:59.470996  960563 main.go:141] libmachine: (functional-059245) Calling .GetSSHUsername
I0314 18:17:59.471105  960563 sshutil.go:53] new ssh client: &{IP:192.168.39.220 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/functional-059245/id_rsa Username:docker}
I0314 18:17:59.555204  960563 build_images.go:161] Building image from path: /tmp/build.3129665300.tar
I0314 18:17:59.555254  960563 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build
I0314 18:17:59.566096  960563 ssh_runner.go:195] Run: stat -c "%s %y" /var/lib/minikube/build/build.3129665300.tar
I0314 18:17:59.571146  960563 ssh_runner.go:352] existence check for /var/lib/minikube/build/build.3129665300.tar: stat -c "%s %y" /var/lib/minikube/build/build.3129665300.tar: Process exited with status 1
stdout:

stderr:
stat: cannot statx '/var/lib/minikube/build/build.3129665300.tar': No such file or directory
I0314 18:17:59.571170  960563 ssh_runner.go:362] scp /tmp/build.3129665300.tar --> /var/lib/minikube/build/build.3129665300.tar (3072 bytes)
I0314 18:17:59.601080  960563 ssh_runner.go:195] Run: sudo mkdir -p /var/lib/minikube/build/build.3129665300
I0314 18:17:59.611383  960563 ssh_runner.go:195] Run: sudo tar -C /var/lib/minikube/build/build.3129665300 -xf /var/lib/minikube/build/build.3129665300.tar
I0314 18:17:59.622236  960563 crio.go:297] Building image: /var/lib/minikube/build/build.3129665300
I0314 18:17:59.622295  960563 ssh_runner.go:195] Run: sudo podman build -t localhost/my-image:functional-059245 /var/lib/minikube/build/build.3129665300 --cgroup-manager=cgroupfs
Trying to pull gcr.io/k8s-minikube/busybox:latest...
Getting image source signatures
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying blob sha256:5cc84ad355aaa64f46ea9c7bbcc319a9d808ab15088a27209c9e70ef86e5a2aa
Copying config sha256:beae173ccac6ad749f76713cf4440fe3d21d1043fe616dfbe30775815d1d0f6a
Writing manifest to image destination
Storing signatures
I0314 18:18:01.188840  960563 ssh_runner.go:235] Completed: sudo podman build -t localhost/my-image:functional-059245 /var/lib/minikube/build/build.3129665300 --cgroup-manager=cgroupfs: (1.566510513s)
I0314 18:18:01.188921  960563 ssh_runner.go:195] Run: sudo rm -rf /var/lib/minikube/build/build.3129665300
I0314 18:18:01.202602  960563 ssh_runner.go:195] Run: sudo rm -f /var/lib/minikube/build/build.3129665300.tar
I0314 18:18:01.213737  960563 build_images.go:217] Built localhost/my-image:functional-059245 from /tmp/build.3129665300.tar
I0314 18:18:01.213772  960563 build_images.go:133] succeeded building to: functional-059245
I0314 18:18:01.213776  960563 build_images.go:134] failed building to: 
I0314 18:18:01.213801  960563 main.go:141] libmachine: Making call to close driver server
I0314 18:18:01.213813  960563 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:18:01.214143  960563 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:18:01.214167  960563 main.go:141] libmachine: Making call to close connection to plugin binary
I0314 18:18:01.214176  960563 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
I0314 18:18:01.214183  960563 main.go:141] libmachine: Making call to close driver server
I0314 18:18:01.214220  960563 main.go:141] libmachine: (functional-059245) Calling .Close
I0314 18:18:01.214523  960563 main.go:141] libmachine: Successfully made call to close driver server
I0314 18:18:01.214565  960563 main.go:141] libmachine: (functional-059245) DBG | Closing plugin on server side
I0314 18:18:01.214580  960563 main.go:141] libmachine: Making call to close connection to plugin binary
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageBuild (2.32s)
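For reference, the build above runs three steps (FROM gcr.io/k8s-minikube/busybox, RUN true, ADD content.txt /) through podman on the crio runtime. The actual contents of testdata/build are not shown in this log; a minimal, hypothetical build context consistent with those steps would be:

    # Dockerfile (hypothetical reconstruction of testdata/build; content.txt is any small file)
    FROM gcr.io/k8s-minikube/busybox
    RUN true
    ADD content.txt /

and it can be exercised against the same profile with the commands already used by the test:

    out/minikube-linux-amd64 -p functional-059245 image build -t localhost/my-image:functional-059245 testdata/build --alsologtostderr
    out/minikube-linux-amd64 -p functional-059245 image ls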

                                                
                                    
TestFunctional/parallel/ImageCommands/Setup (1.19s)

=== RUN   TestFunctional/parallel/ImageCommands/Setup
functional_test.go:341: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:341: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.8: (1.163095597s)
functional_test.go:346: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.8 gcr.io/google-containers/addon-resizer:functional-059245
--- PASS: TestFunctional/parallel/ImageCommands/Setup (1.19s)

TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.2s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageReloadDaemon
functional_test.go:364: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr
functional_test.go:364: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr: (2.861441425s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageReloadDaemon (3.20s)

TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.35s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon
functional_test.go:234: (dbg) Run:  docker pull gcr.io/google-containers/addon-resizer:1.8.9
functional_test.go:234: (dbg) Done: docker pull gcr.io/google-containers/addon-resizer:1.8.9: (1.101250391s)
functional_test.go:239: (dbg) Run:  docker tag gcr.io/google-containers/addon-resizer:1.8.9 gcr.io/google-containers/addon-resizer:functional-059245
functional_test.go:244: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr
functional_test.go:244: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image load --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr: (8.969269334s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageTagAndLoadDaemon (10.35s)

TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_changes
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_changes

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_changes
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_changes (0.13s)

TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_minikube_cluster (0.11s)

TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

=== RUN   TestFunctional/parallel/UpdateContextCmd/no_clusters
=== PAUSE TestFunctional/parallel/UpdateContextCmd/no_clusters

=== CONT  TestFunctional/parallel/UpdateContextCmd/no_clusters
functional_test.go:2115: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 update-context --alsologtostderr -v=2
--- PASS: TestFunctional/parallel/UpdateContextCmd/no_clusters (0.12s)

TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveToFile
functional_test.go:379: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image save gcr.io/google-containers/addon-resizer:functional-059245 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:379: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image save gcr.io/google-containers/addon-resizer:functional-059245 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.278575695s)
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveToFile (1.28s)

TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageRemove
functional_test.go:391: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image rm gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
E0314 18:17:55.819337  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
--- PASS: TestFunctional/parallel/ImageCommands/ImageRemove (0.61s)

TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.78s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageLoadFromFile
functional_test.go:408: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr
functional_test.go:408: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar --alsologtostderr: (1.539347507s)
functional_test.go:447: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image ls
--- PASS: TestFunctional/parallel/ImageCommands/ImageLoadFromFile (1.78s)

TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)

=== RUN   TestFunctional/parallel/ImageCommands/ImageSaveDaemon
functional_test.go:418: (dbg) Run:  docker rmi gcr.io/google-containers/addon-resizer:functional-059245
functional_test.go:423: (dbg) Run:  out/minikube-linux-amd64 -p functional-059245 image save --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr
functional_test.go:423: (dbg) Done: out/minikube-linux-amd64 -p functional-059245 image save --daemon gcr.io/google-containers/addon-resizer:functional-059245 --alsologtostderr: (1.123430221s)
functional_test.go:428: (dbg) Run:  docker image inspect gcr.io/google-containers/addon-resizer:functional-059245
--- PASS: TestFunctional/parallel/ImageCommands/ImageSaveDaemon (1.16s)
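Taken together, ImageSaveToFile, ImageRemove, ImageLoadFromFile and ImageSaveDaemon above exercise a save/remove/load round trip for the cached image. A condensed sketch of that flow, using only commands that appear in the logs (the tarball path is the workspace path from this run and would differ elsewhere):

    out/minikube-linux-amd64 -p functional-059245 image save gcr.io/google-containers/addon-resizer:functional-059245 /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-059245 image rm gcr.io/google-containers/addon-resizer:functional-059245
    out/minikube-linux-amd64 -p functional-059245 image load /home/jenkins/workspace/KVM_Linux_crio_integration/addon-resizer-save.tar
    out/minikube-linux-amd64 -p functional-059245 image save --daemon gcr.io/google-containers/addon-resizer:functional-059245
    docker image inspect gcr.io/google-containers/addon-resizer:functional-059245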

                                                
                                    
TestFunctional/delete_addon-resizer_images (0.07s)

=== RUN   TestFunctional/delete_addon-resizer_images
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:1.8.8
functional_test.go:189: (dbg) Run:  docker rmi -f gcr.io/google-containers/addon-resizer:functional-059245
--- PASS: TestFunctional/delete_addon-resizer_images (0.07s)

TestFunctional/delete_my-image_image (0.01s)

=== RUN   TestFunctional/delete_my-image_image
functional_test.go:197: (dbg) Run:  docker rmi -f localhost/my-image:functional-059245
--- PASS: TestFunctional/delete_my-image_image (0.01s)

TestFunctional/delete_minikube_cached_images (0.02s)

=== RUN   TestFunctional/delete_minikube_cached_images
functional_test.go:205: (dbg) Run:  docker rmi -f minikube-local-cache-test:functional-059245
--- PASS: TestFunctional/delete_minikube_cached_images (0.02s)

TestMutliControlPlane/serial/StartCluster (211.63s)

=== RUN   TestMutliControlPlane/serial/StartCluster
ha_test.go:101: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105786 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0314 18:18:36.780052  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:19:58.702471  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
ha_test.go:101: (dbg) Done: out/minikube-linux-amd64 start -p ha-105786 --wait=true --memory=2200 --ha -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (3m30.921557264s)
ha_test.go:107: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/StartCluster (211.63s)
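This is the cluster the remaining TestMutliControlPlane subtests run against: the --ha flag brings up several control-plane nodes under a single profile (ha-105786, ha-105786-m02 and ha-105786-m03 in this run; AddWorkerNode later adds ha-105786-m04). A sketch of the same start plus the health checks the suite keeps repeating, copied from the commands in these logs:

    out/minikube-linux-amd64 start -p ha-105786 --wait=true --memory=2200 --ha --driver=kvm2 --container-runtime=crio
    out/minikube-linux-amd64 -p ha-105786 status
    out/minikube-linux-amd64 profile list --output json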

                                                
                                    
TestMutliControlPlane/serial/DeployApp (5.36s)

=== RUN   TestMutliControlPlane/serial/DeployApp
ha_test.go:128: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- apply -f ./testdata/ha/ha-pod-dns-test.yaml
ha_test.go:133: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- rollout status deployment/busybox
ha_test.go:133: (dbg) Done: out/minikube-linux-amd64 kubectl -p ha-105786 -- rollout status deployment/busybox: (2.736014182s)
ha_test.go:140: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- get pods -o jsonpath='{.items[*].status.podIP}'
ha_test.go:163: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-4h99c -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-g4zv5 -- nslookup kubernetes.io
ha_test.go:171: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-k6gxp -- nslookup kubernetes.io
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-4h99c -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-g4zv5 -- nslookup kubernetes.default
ha_test.go:181: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-k6gxp -- nslookup kubernetes.default
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-4h99c -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-g4zv5 -- nslookup kubernetes.default.svc.cluster.local
ha_test.go:189: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-k6gxp -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMutliControlPlane/serial/DeployApp (5.36s)

TestMutliControlPlane/serial/PingHostFromPods (1.52s)

=== RUN   TestMutliControlPlane/serial/PingHostFromPods
ha_test.go:199: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- get pods -o jsonpath='{.items[*].metadata.name}'
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-4h99c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-4h99c -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-g4zv5 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-g4zv5 -- sh -c "ping -c 1 192.168.39.1"
ha_test.go:207: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-k6gxp -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
ha_test.go:218: (dbg) Run:  out/minikube-linux-amd64 kubectl -p ha-105786 -- exec busybox-5b5d89c9d6-k6gxp -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMutliControlPlane/serial/PingHostFromPods (1.52s)
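The shell pipeline in this test is the only non-obvious part: inside each busybox pod, nslookup host.minikube.internal is run, awk 'NR==5' keeps the fifth line of its output and cut -d' ' -f3 keeps that line's third space-separated field, which, given the ping that follows, is evidently the resolved host IP (192.168.39.1 here); that address is then pinged once. A standalone sketch of the same check, assuming a kubeconfig context named ha-105786 in place of the out/minikube-linux-amd64 kubectl wrapper the test uses (pod names are specific to this run):

    kubectl --context ha-105786 exec busybox-5b5d89c9d6-4h99c -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
    kubectl --context ha-105786 exec busybox-5b5d89c9d6-4h99c -- sh -c "ping -c 1 192.168.39.1"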

                                                
                                    
TestMutliControlPlane/serial/AddWorkerNode (45.11s)

=== RUN   TestMutliControlPlane/serial/AddWorkerNode
ha_test.go:228: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-105786 -v=7 --alsologtostderr
E0314 18:22:14.528624  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.534002  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.544292  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.564618  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.604973  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.685322  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.846272  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:14.854548  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:22:15.167020  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:15.808269  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:17.089359  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:19.650036  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:22:24.770460  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
ha_test.go:228: (dbg) Done: out/minikube-linux-amd64 node add -p ha-105786 -v=7 --alsologtostderr: (44.2172647s)
ha_test.go:234: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddWorkerNode (45.11s)

TestMutliControlPlane/serial/NodeLabels (0.07s)

=== RUN   TestMutliControlPlane/serial/NodeLabels
ha_test.go:255: (dbg) Run:  kubectl --context ha-105786 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMutliControlPlane/serial/NodeLabels (0.07s)

TestMutliControlPlane/serial/HAppyAfterClusterStart (0.56s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterClusterStart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterClusterStart (0.56s)

TestMutliControlPlane/serial/CopyFile (14.12s)

=== RUN   TestMutliControlPlane/serial/CopyFile
ha_test.go:326: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status --output json -v=7 --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp testdata/cp-test.txt ha-105786:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786:/home/docker/cp-test.txt ha-105786-m02:/home/docker/cp-test_ha-105786_ha-105786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test_ha-105786_ha-105786-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786:/home/docker/cp-test.txt ha-105786-m03:/home/docker/cp-test_ha-105786_ha-105786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test_ha-105786_ha-105786-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786:/home/docker/cp-test.txt ha-105786-m04:/home/docker/cp-test_ha-105786_ha-105786-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test_ha-105786_ha-105786-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp testdata/cp-test.txt ha-105786-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m02:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m02:/home/docker/cp-test.txt ha-105786:/home/docker/cp-test_ha-105786-m02_ha-105786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test_ha-105786-m02_ha-105786.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m02:/home/docker/cp-test.txt ha-105786-m03:/home/docker/cp-test_ha-105786-m02_ha-105786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test_ha-105786-m02_ha-105786-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m02:/home/docker/cp-test.txt ha-105786-m04:/home/docker/cp-test_ha-105786-m02_ha-105786-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test_ha-105786-m02_ha-105786-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp testdata/cp-test.txt ha-105786-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test.txt"
E0314 18:22:35.011575  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt ha-105786:/home/docker/cp-test_ha-105786-m03_ha-105786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test_ha-105786-m03_ha-105786.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt ha-105786-m02:/home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test_ha-105786-m03_ha-105786-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m03:/home/docker/cp-test.txt ha-105786-m04:/home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test_ha-105786-m03_ha-105786-m04.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp testdata/cp-test.txt ha-105786-m04:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt /tmp/TestMutliControlPlaneserialCopyFile3116594682/001/cp-test_ha-105786-m04.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt ha-105786:/home/docker/cp-test_ha-105786-m04_ha-105786.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786 "sudo cat /home/docker/cp-test_ha-105786-m04_ha-105786.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt ha-105786-m02:/home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m02 "sudo cat /home/docker/cp-test_ha-105786-m04_ha-105786-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m04:/home/docker/cp-test.txt ha-105786-m03:/home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m04 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test_ha-105786-m04_ha-105786-m03.txt"
--- PASS: TestMutliControlPlane/serial/CopyFile (14.12s)
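CopyFile above is an all-pairs sweep: cp-test.txt is copied from testdata onto each node, then from every node back to a local temp directory and across to every other node, with an ssh "sudo cat" after each copy to confirm the file arrived intact. One leg of that matrix, using the profile and node names from this run:

    out/minikube-linux-amd64 -p ha-105786 cp testdata/cp-test.txt ha-105786-m02:/home/docker/cp-test.txt
    out/minikube-linux-amd64 -p ha-105786 cp ha-105786-m02:/home/docker/cp-test.txt ha-105786-m03:/home/docker/cp-test_ha-105786-m02_ha-105786-m03.txt
    out/minikube-linux-amd64 -p ha-105786 ssh -n ha-105786-m03 "sudo cat /home/docker/cp-test_ha-105786-m02_ha-105786-m03.txt"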

                                                
                                    
TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.5s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
ha_test.go:390: (dbg) Done: out/minikube-linux-amd64 profile list --output json: (3.503906163s)
--- PASS: TestMutliControlPlane/serial/DegradedAfterControlPlaneNodeStop (3.50s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeRestart (0.41s)

TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterSecondaryNodeDelete (0.41s)

TestMutliControlPlane/serial/RestartCluster (291.46s)

=== RUN   TestMutliControlPlane/serial/RestartCluster
ha_test.go:560: (dbg) Run:  out/minikube-linux-amd64 start -p ha-105786 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0314 18:37:14.528350  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:37:14.853781  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 18:38:37.575741  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
ha_test.go:560: (dbg) Done: out/minikube-linux-amd64 start -p ha-105786 --wait=true -v=7 --alsologtostderr --driver=kvm2  --container-runtime=crio: (4m50.665594161s)
ha_test.go:566: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
ha_test.go:584: (dbg) Run:  kubectl get nodes
ha_test.go:592: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMutliControlPlane/serial/RestartCluster (291.46s)

TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.39s)

=== RUN   TestMutliControlPlane/serial/DegradedAfterClusterRestart
ha_test.go:390: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/DegradedAfterClusterRestart (0.39s)

TestMutliControlPlane/serial/AddSecondaryNode (75.82s)

=== RUN   TestMutliControlPlane/serial/AddSecondaryNode
ha_test.go:605: (dbg) Run:  out/minikube-linux-amd64 node add -p ha-105786 --control-plane -v=7 --alsologtostderr
ha_test.go:605: (dbg) Done: out/minikube-linux-amd64 node add -p ha-105786 --control-plane -v=7 --alsologtostderr: (1m14.938922236s)
ha_test.go:611: (dbg) Run:  out/minikube-linux-amd64 -p ha-105786 status -v=7 --alsologtostderr
--- PASS: TestMutliControlPlane/serial/AddSecondaryNode (75.82s)

TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

=== RUN   TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd
ha_test.go:281: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMutliControlPlane/serial/HAppyAfterSecondaryNodeAdd (0.56s)

TestJSONOutput/start/Command (99.12s)

=== RUN   TestJSONOutput/start/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-191425 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio
E0314 18:42:14.528367  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:42:14.853898  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 start -p json-output-191425 --output=json --user=testUser --memory=2200 --wait=true --driver=kvm2  --container-runtime=crio: (1m39.123633426s)
--- PASS: TestJSONOutput/start/Command (99.12s)

TestJSONOutput/start/Audit (0s)

=== RUN   TestJSONOutput/start/Audit
--- PASS: TestJSONOutput/start/Audit (0.00s)

TestJSONOutput/start/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/start/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/start/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/start/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/start/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/start/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/start/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/start/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/pause/Command (0.76s)

=== RUN   TestJSONOutput/pause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 pause -p json-output-191425 --output=json --user=testUser
--- PASS: TestJSONOutput/pause/Command (0.76s)

TestJSONOutput/pause/Audit (0s)

=== RUN   TestJSONOutput/pause/Audit
--- PASS: TestJSONOutput/pause/Audit (0.00s)

TestJSONOutput/pause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/pause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/pause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/pause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/pause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/unpause/Command (0.66s)

=== RUN   TestJSONOutput/unpause/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 unpause -p json-output-191425 --output=json --user=testUser
--- PASS: TestJSONOutput/unpause/Command (0.66s)

TestJSONOutput/unpause/Audit (0s)

=== RUN   TestJSONOutput/unpause/Audit
--- PASS: TestJSONOutput/unpause/Audit (0.00s)

TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/unpause/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/unpause/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/unpause/parallel/IncreasingCurrentSteps (0.00s)

TestJSONOutput/stop/Command (7.4s)

=== RUN   TestJSONOutput/stop/Command
json_output_test.go:63: (dbg) Run:  out/minikube-linux-amd64 stop -p json-output-191425 --output=json --user=testUser
json_output_test.go:63: (dbg) Done: out/minikube-linux-amd64 stop -p json-output-191425 --output=json --user=testUser: (7.40323111s)
--- PASS: TestJSONOutput/stop/Command (7.40s)

TestJSONOutput/stop/Audit (0s)

=== RUN   TestJSONOutput/stop/Audit
--- PASS: TestJSONOutput/stop/Audit (0.00s)

TestJSONOutput/stop/parallel/DistinctCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/DistinctCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/DistinctCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/DistinctCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/DistinctCurrentSteps (0.00s)

TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0s)

=== RUN   TestJSONOutput/stop/parallel/IncreasingCurrentSteps
=== PAUSE TestJSONOutput/stop/parallel/IncreasingCurrentSteps

=== CONT  TestJSONOutput/stop/parallel/IncreasingCurrentSteps
--- PASS: TestJSONOutput/stop/parallel/IncreasingCurrentSteps (0.00s)

                                                
                                    
TestErrorJSONOutput (0.22s)

                                                
                                                
=== RUN   TestErrorJSONOutput
json_output_test.go:160: (dbg) Run:  out/minikube-linux-amd64 start -p json-output-error-445339 --memory=2200 --output=json --wait=true --driver=fail
json_output_test.go:160: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p json-output-error-445339 --memory=2200 --output=json --wait=true --driver=fail: exit status 56 (77.291524ms)

                                                
                                                
-- stdout --
	{"specversion":"1.0","id":"df9c4aa6-094f-4d4d-b2b0-1f4b5bfb8f84","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.step","datacontenttype":"application/json","data":{"currentstep":"0","message":"[json-output-error-445339] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)","name":"Initial Minikube Setup","totalsteps":"19"}}
	{"specversion":"1.0","id":"163a61b3-5f04-45b3-bc66-56802c975180","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_LOCATION=18384"}}
	{"specversion":"1.0","id":"bd03c04c-ab68-45e5-9378-755cce1565e8","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true"}}
	{"specversion":"1.0","id":"d9f25d59-2fd1-491f-9c8b-cb40cc1b2600","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig"}}
	{"specversion":"1.0","id":"28c79693-e5e6-405e-be54-1d6365b924fc","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube"}}
	{"specversion":"1.0","id":"ff8d8baa-a74e-436f-8922-084e659a1485","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_BIN=out/minikube-linux-amd64"}}
	{"specversion":"1.0","id":"256c5092-8a9e-4aa4-9f5d-35a08e4d331e","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.info","datacontenttype":"application/json","data":{"message":"MINIKUBE_FORCE_SYSTEMD="}}
	{"specversion":"1.0","id":"33df0266-142c-440b-b217-dc744d4b8fe2","source":"https://minikube.sigs.k8s.io/","type":"io.k8s.sigs.minikube.error","datacontenttype":"application/json","data":{"advice":"","exitcode":"56","issues":"","message":"The driver 'fail' is not supported on linux/amd64","name":"DRV_UNSUPPORTED_OS","url":""}}

                                                
                                                
-- /stdout --
helpers_test.go:175: Cleaning up "json-output-error-445339" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p json-output-error-445339
--- PASS: TestErrorJSONOutput (0.22s)
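
Note: each line of the --output=json stream above is a self-contained CloudEvent. A minimal sketch for pulling the error event back out of that stream, assuming jq is available (jq is not part of the test harness):

	# Re-run the failing start and keep only error events, printing name, exit code, and message.
	out/minikube-linux-amd64 start -p json-output-error-445339 --memory=2200 --output=json --wait=true --driver=fail \
	  | jq -r 'select(.type == "io.k8s.sigs.minikube.error") | .data | "\(.name) (exit \(.exitcode)): \(.message)"'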

                                                
                                    
TestMainNoArgs (0.06s)

                                                
                                                
=== RUN   TestMainNoArgs
main_test.go:68: (dbg) Run:  out/minikube-linux-amd64
--- PASS: TestMainNoArgs (0.06s)

                                                
                                    
TestMinikubeProfile (97.55s)

                                                
                                                
=== RUN   TestMinikubeProfile
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p first-059672 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p first-059672 --driver=kvm2  --container-runtime=crio: (45.5578792s)
minikube_profile_test.go:44: (dbg) Run:  out/minikube-linux-amd64 start -p second-063084 --driver=kvm2  --container-runtime=crio
minikube_profile_test.go:44: (dbg) Done: out/minikube-linux-amd64 start -p second-063084 --driver=kvm2  --container-runtime=crio: (49.182758762s)
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile first-059672
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
minikube_profile_test.go:51: (dbg) Run:  out/minikube-linux-amd64 profile second-063084
minikube_profile_test.go:55: (dbg) Run:  out/minikube-linux-amd64 profile list -ojson
helpers_test.go:175: Cleaning up "second-063084" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p second-063084
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p second-063084: (1.016268801s)
helpers_test.go:175: Cleaning up "first-059672" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p first-059672
--- PASS: TestMinikubeProfile (97.55s)

                                                
                                    
TestMountStart/serial/StartWithMountFirst (27.57s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountFirst
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-1-082221 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-1-082221 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46464 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (26.574689662s)
--- PASS: TestMountStart/serial/StartWithMountFirst (27.57s)

                                                
                                    
TestMountStart/serial/VerifyMountFirst (0.4s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountFirst
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-082221 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-1-082221 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountFirst (0.40s)
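
Note: the two commands above are the whole 9p verification; as a standalone sketch against the same profile:

	# Confirm the host mount is visible inside the guest and is backed by a 9p filesystem.
	out/minikube-linux-amd64 -p mount-start-1-082221 ssh -- ls /minikube-host
	out/minikube-linux-amd64 -p mount-start-1-082221 ssh -- mount | grep 9p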

                                                
                                    
TestMountStart/serial/StartWithMountSecond (28.21s)

                                                
                                                
=== RUN   TestMountStart/serial/StartWithMountSecond
mount_start_test.go:98: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097963 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio
mount_start_test.go:98: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097963 --memory=2048 --mount --mount-gid 0 --mount-msize 6543 --mount-port 46465 --mount-uid 0 --no-kubernetes --driver=kvm2  --container-runtime=crio: (27.213660664s)
--- PASS: TestMountStart/serial/StartWithMountSecond (28.21s)

                                                
                                    
TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountSecond
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountSecond (0.42s)

                                                
                                    
TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                                
=== RUN   TestMountStart/serial/DeleteFirst
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p mount-start-1-082221 --alsologtostderr -v=5
--- PASS: TestMountStart/serial/DeleteFirst (0.69s)

                                                
                                    
TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostDelete
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostDelete (0.41s)

                                                
                                    
TestMountStart/serial/Stop (1.38s)

                                                
                                                
=== RUN   TestMountStart/serial/Stop
mount_start_test.go:155: (dbg) Run:  out/minikube-linux-amd64 stop -p mount-start-2-097963
mount_start_test.go:155: (dbg) Done: out/minikube-linux-amd64 stop -p mount-start-2-097963: (1.376969383s)
--- PASS: TestMountStart/serial/Stop (1.38s)

                                                
                                    
TestMountStart/serial/RestartStopped (21.87s)

                                                
                                                
=== RUN   TestMountStart/serial/RestartStopped
mount_start_test.go:166: (dbg) Run:  out/minikube-linux-amd64 start -p mount-start-2-097963
mount_start_test.go:166: (dbg) Done: out/minikube-linux-amd64 start -p mount-start-2-097963: (20.868534171s)
--- PASS: TestMountStart/serial/RestartStopped (21.87s)

                                                
                                    
TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                                
=== RUN   TestMountStart/serial/VerifyMountPostStop
mount_start_test.go:114: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- ls /minikube-host
mount_start_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p mount-start-2-097963 ssh -- mount | grep 9p
--- PASS: TestMountStart/serial/VerifyMountPostStop (0.42s)

                                                
                                    
TestMultiNode/serial/FreshStart2Nodes (104.85s)

                                                
                                                
=== RUN   TestMultiNode/serial/FreshStart2Nodes
multinode_test.go:96: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-669543 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
E0314 18:47:14.528030  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 18:47:14.854215  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
multinode_test.go:96: (dbg) Done: out/minikube-linux-amd64 start -p multinode-669543 --wait=true --memory=2200 --nodes=2 -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (1m44.428415425s)
multinode_test.go:102: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr
--- PASS: TestMultiNode/serial/FreshStart2Nodes (104.85s)

                                                
                                    
TestMultiNode/serial/DeployApp2Nodes (4.37s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeployApp2Nodes
multinode_test.go:493: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- apply -f ./testdata/multinodes/multinode-pod-dns-test.yaml
multinode_test.go:498: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- rollout status deployment/busybox
multinode_test.go:498: (dbg) Done: out/minikube-linux-amd64 kubectl -p multinode-669543 -- rollout status deployment/busybox: (2.620592624s)
multinode_test.go:505: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- get pods -o jsonpath='{.items[*].status.podIP}'
multinode_test.go:528: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-nslm6 -- nslookup kubernetes.io
multinode_test.go:536: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-wdd4q -- nslookup kubernetes.io
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-nslm6 -- nslookup kubernetes.default
multinode_test.go:546: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-wdd4q -- nslookup kubernetes.default
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-nslm6 -- nslookup kubernetes.default.svc.cluster.local
multinode_test.go:554: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-wdd4q -- nslookup kubernetes.default.svc.cluster.local
--- PASS: TestMultiNode/serial/DeployApp2Nodes (4.37s)
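
Note: a condensed sketch of the per-pod DNS checks run above; the pod name is taken from this run and would differ on another run:

	# Resolve an external name and the fully qualified in-cluster service name from one busybox pod.
	POD=busybox-5b5d89c9d6-nslm6
	out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec "$POD" -- nslookup kubernetes.io
	out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec "$POD" -- nslookup kubernetes.default.svc.cluster.local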

                                                
                                    
TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                                
=== RUN   TestMultiNode/serial/PingHostFrom2Pods
multinode_test.go:564: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- get pods -o jsonpath='{.items[*].metadata.name}'
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-nslm6 -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-nslm6 -- sh -c "ping -c 1 192.168.39.1"
multinode_test.go:572: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-wdd4q -- sh -c "nslookup host.minikube.internal | awk 'NR==5' | cut -d' ' -f3"
multinode_test.go:583: (dbg) Run:  out/minikube-linux-amd64 kubectl -p multinode-669543 -- exec busybox-5b5d89c9d6-wdd4q -- sh -c "ping -c 1 192.168.39.1"
--- PASS: TestMultiNode/serial/PingHostFrom2Pods (0.97s)

                                                
                                    
TestMultiNode/serial/AddNode (39.87s)

                                                
                                                
=== RUN   TestMultiNode/serial/AddNode
multinode_test.go:121: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-669543 -v 3 --alsologtostderr
multinode_test.go:121: (dbg) Done: out/minikube-linux-amd64 node add -p multinode-669543 -v 3 --alsologtostderr: (39.283767838s)
multinode_test.go:127: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr
--- PASS: TestMultiNode/serial/AddNode (39.87s)

                                                
                                    
TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                                
=== RUN   TestMultiNode/serial/MultiNodeLabels
multinode_test.go:221: (dbg) Run:  kubectl --context multinode-669543 get nodes -o "jsonpath=[{range .items[*]}{.metadata.labels},{end}]"
--- PASS: TestMultiNode/serial/MultiNodeLabels (0.06s)

                                                
                                    
TestMultiNode/serial/ProfileList (0.23s)

                                                
                                                
=== RUN   TestMultiNode/serial/ProfileList
multinode_test.go:143: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestMultiNode/serial/ProfileList (0.23s)

                                                
                                    
TestMultiNode/serial/CopyFile (7.88s)

                                                
                                                
=== RUN   TestMultiNode/serial/CopyFile
multinode_test.go:184: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --output json --alsologtostderr
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp testdata/cp-test.txt multinode-669543:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543:/home/docker/cp-test.txt multinode-669543-m02:/home/docker/cp-test_multinode-669543_multinode-669543-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test_multinode-669543_multinode-669543-m02.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543:/home/docker/cp-test.txt multinode-669543-m03:/home/docker/cp-test_multinode-669543_multinode-669543-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test_multinode-669543_multinode-669543-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp testdata/cp-test.txt multinode-669543-m02:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt multinode-669543:/home/docker/cp-test_multinode-669543-m02_multinode-669543.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test_multinode-669543-m02_multinode-669543.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m02:/home/docker/cp-test.txt multinode-669543-m03:/home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test_multinode-669543-m02_multinode-669543-m03.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp testdata/cp-test.txt multinode-669543-m03:/home/docker/cp-test.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt /tmp/TestMultiNodeserialCopyFile1246053536/001/cp-test_multinode-669543-m03.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt multinode-669543:/home/docker/cp-test_multinode-669543-m03_multinode-669543.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test_multinode-669543-m03_multinode-669543.txt"
helpers_test.go:556: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 cp multinode-669543-m03:/home/docker/cp-test.txt multinode-669543-m02:/home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m03 "sudo cat /home/docker/cp-test.txt"
helpers_test.go:534: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543-m02 "sudo cat /home/docker/cp-test_multinode-669543-m03_multinode-669543-m02.txt"
--- PASS: TestMultiNode/serial/CopyFile (7.88s)
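
Note: one round-trip from the copy matrix above, as a standalone sketch (paths mirror the test's):

	# Copy a file from the host into the primary node, then read it back over ssh to verify.
	out/minikube-linux-amd64 -p multinode-669543 cp testdata/cp-test.txt multinode-669543:/home/docker/cp-test.txt
	out/minikube-linux-amd64 -p multinode-669543 ssh -n multinode-669543 "sudo cat /home/docker/cp-test.txt"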

                                                
                                    
TestMultiNode/serial/StopNode (3.2s)

                                                
                                                
=== RUN   TestMultiNode/serial/StopNode
multinode_test.go:248: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 node stop m03
multinode_test.go:248: (dbg) Done: out/minikube-linux-amd64 -p multinode-669543 node stop m03: (2.293322959s)
multinode_test.go:254: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status
multinode_test.go:254: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-669543 status: exit status 7 (449.144359ms)

                                                
                                                
-- stdout --
	multinode-669543
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-669543-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-669543-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
multinode_test.go:261: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr
multinode_test.go:261: (dbg) Non-zero exit: out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr: exit status 7 (455.133059ms)

                                                
                                                
-- stdout --
	multinode-669543
	type: Control Plane
	host: Running
	kubelet: Running
	apiserver: Running
	kubeconfig: Configured
	
	multinode-669543-m02
	type: Worker
	host: Running
	kubelet: Running
	
	multinode-669543-m03
	type: Worker
	host: Stopped
	kubelet: Stopped
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 18:49:23.675134  975780 out.go:291] Setting OutFile to fd 1 ...
	I0314 18:49:23.675398  975780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:49:23.675407  975780 out.go:304] Setting ErrFile to fd 2...
	I0314 18:49:23.675411  975780 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 18:49:23.675606  975780 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 18:49:23.675779  975780 out.go:298] Setting JSON to false
	I0314 18:49:23.675807  975780 mustload.go:65] Loading cluster: multinode-669543
	I0314 18:49:23.675943  975780 notify.go:220] Checking for updates...
	I0314 18:49:23.676221  975780 config.go:182] Loaded profile config "multinode-669543": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 18:49:23.676244  975780 status.go:255] checking status of multinode-669543 ...
	I0314 18:49:23.676713  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.676797  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.696145  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:40583
	I0314 18:49:23.696542  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.697019  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.697054  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.697472  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.697684  975780 main.go:141] libmachine: (multinode-669543) Calling .GetState
	I0314 18:49:23.699245  975780 status.go:330] multinode-669543 host status = "Running" (err=<nil>)
	I0314 18:49:23.699263  975780 host.go:66] Checking if "multinode-669543" exists ...
	I0314 18:49:23.699543  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.699578  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.714155  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:46753
	I0314 18:49:23.714544  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.714963  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.714987  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.715311  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.715495  975780 main.go:141] libmachine: (multinode-669543) Calling .GetIP
	I0314 18:49:23.718301  975780 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:49:23.718777  975780 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:49:23.718798  975780 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:49:23.718969  975780 host.go:66] Checking if "multinode-669543" exists ...
	I0314 18:49:23.719372  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.719409  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.734044  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:44925
	I0314 18:49:23.734568  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.735064  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.735098  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.735474  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.735687  975780 main.go:141] libmachine: (multinode-669543) Calling .DriverName
	I0314 18:49:23.735894  975780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:49:23.735926  975780 main.go:141] libmachine: (multinode-669543) Calling .GetSSHHostname
	I0314 18:49:23.738820  975780 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:49:23.739218  975780 main.go:141] libmachine: (multinode-669543) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:d2:91:c3", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:46:58 +0000 UTC Type:0 Mac:52:54:00:d2:91:c3 Iaid: IPaddr:192.168.39.68 Prefix:24 Hostname:multinode-669543 Clientid:01:52:54:00:d2:91:c3}
	I0314 18:49:23.739238  975780 main.go:141] libmachine: (multinode-669543) DBG | domain multinode-669543 has defined IP address 192.168.39.68 and MAC address 52:54:00:d2:91:c3 in network mk-multinode-669543
	I0314 18:49:23.739425  975780 main.go:141] libmachine: (multinode-669543) Calling .GetSSHPort
	I0314 18:49:23.739608  975780 main.go:141] libmachine: (multinode-669543) Calling .GetSSHKeyPath
	I0314 18:49:23.739760  975780 main.go:141] libmachine: (multinode-669543) Calling .GetSSHUsername
	I0314 18:49:23.739903  975780 sshutil.go:53] new ssh client: &{IP:192.168.39.68 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543/id_rsa Username:docker}
	I0314 18:49:23.823048  975780 ssh_runner.go:195] Run: systemctl --version
	I0314 18:49:23.830685  975780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:49:23.847090  975780 kubeconfig.go:125] found "multinode-669543" server: "https://192.168.39.68:8443"
	I0314 18:49:23.847117  975780 api_server.go:166] Checking apiserver status ...
	I0314 18:49:23.847150  975780 ssh_runner.go:195] Run: sudo pgrep -xnf kube-apiserver.*minikube.*
	I0314 18:49:23.868335  975780 ssh_runner.go:195] Run: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup
	W0314 18:49:23.880646  975780 api_server.go:177] unable to find freezer cgroup: sudo egrep ^[0-9]+:freezer: /proc/1147/cgroup: Process exited with status 1
	stdout:
	
	stderr:
	I0314 18:49:23.880716  975780 ssh_runner.go:195] Run: ls
	I0314 18:49:23.886088  975780 api_server.go:253] Checking apiserver healthz at https://192.168.39.68:8443/healthz ...
	I0314 18:49:23.891970  975780 api_server.go:279] https://192.168.39.68:8443/healthz returned 200:
	ok
	I0314 18:49:23.891998  975780 status.go:422] multinode-669543 apiserver status = Running (err=<nil>)
	I0314 18:49:23.892011  975780 status.go:257] multinode-669543 status: &{Name:multinode-669543 Host:Running Kubelet:Running APIServer:Running Kubeconfig:Configured Worker:false TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:49:23.892036  975780 status.go:255] checking status of multinode-669543-m02 ...
	I0314 18:49:23.892489  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.892529  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.909284  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39367
	I0314 18:49:23.909764  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.910260  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.910284  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.910620  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.910801  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetState
	I0314 18:49:23.912414  975780 status.go:330] multinode-669543-m02 host status = "Running" (err=<nil>)
	I0314 18:49:23.912433  975780 host.go:66] Checking if "multinode-669543-m02" exists ...
	I0314 18:49:23.912855  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.912900  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.927504  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:35345
	I0314 18:49:23.927891  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.928358  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.928381  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.928684  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.928851  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetIP
	I0314 18:49:23.931547  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | domain multinode-669543-m02 has defined MAC address 52:54:00:cf:c7:e0 in network mk-multinode-669543
	I0314 18:49:23.931988  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c7:e0", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:48:02 +0000 UTC Type:0 Mac:52:54:00:cf:c7:e0 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-669543-m02 Clientid:01:52:54:00:cf:c7:e0}
	I0314 18:49:23.932024  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | domain multinode-669543-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:cf:c7:e0 in network mk-multinode-669543
	I0314 18:49:23.932153  975780 host.go:66] Checking if "multinode-669543-m02" exists ...
	I0314 18:49:23.932498  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:23.932547  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:23.946912  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:41693
	I0314 18:49:23.947318  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:23.947714  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:23.947737  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:23.948080  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:23.948246  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .DriverName
	I0314 18:49:23.948419  975780 ssh_runner.go:195] Run: sh -c "df -h /var | awk 'NR==2{print $5}'"
	I0314 18:49:23.948440  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetSSHHostname
	I0314 18:49:23.951376  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | domain multinode-669543-m02 has defined MAC address 52:54:00:cf:c7:e0 in network mk-multinode-669543
	I0314 18:49:23.951855  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | found host DHCP lease matching {name: "", mac: "52:54:00:cf:c7:e0", ip: ""} in network mk-multinode-669543: {Iface:virbr1 ExpiryTime:2024-03-14 19:48:02 +0000 UTC Type:0 Mac:52:54:00:cf:c7:e0 Iaid: IPaddr:192.168.39.16 Prefix:24 Hostname:multinode-669543-m02 Clientid:01:52:54:00:cf:c7:e0}
	I0314 18:49:23.951893  975780 main.go:141] libmachine: (multinode-669543-m02) DBG | domain multinode-669543-m02 has defined IP address 192.168.39.16 and MAC address 52:54:00:cf:c7:e0 in network mk-multinode-669543
	I0314 18:49:23.952000  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetSSHPort
	I0314 18:49:23.952148  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetSSHKeyPath
	I0314 18:49:23.952335  975780 main.go:141] libmachine: (multinode-669543-m02) Calling .GetSSHUsername
	I0314 18:49:23.952458  975780 sshutil.go:53] new ssh client: &{IP:192.168.39.16 Port:22 SSHKeyPath:/home/jenkins/minikube-integration/18384-942544/.minikube/machines/multinode-669543-m02/id_rsa Username:docker}
	I0314 18:49:24.035978  975780 ssh_runner.go:195] Run: sudo systemctl is-active --quiet service kubelet
	I0314 18:49:24.050736  975780 status.go:257] multinode-669543-m02 status: &{Name:multinode-669543-m02 Host:Running Kubelet:Running APIServer:Irrelevant Kubeconfig:Irrelevant Worker:true TimeToStop: DockerEnv: PodManEnv:}
	I0314 18:49:24.050771  975780 status.go:255] checking status of multinode-669543-m03 ...
	I0314 18:49:24.051071  975780 main.go:141] libmachine: Found binary path at /home/jenkins/workspace/KVM_Linux_crio_integration/out/docker-machine-driver-kvm2
	I0314 18:49:24.051107  975780 main.go:141] libmachine: Launching plugin server for driver kvm2
	I0314 18:49:24.067001  975780 main.go:141] libmachine: Plugin server listening at address 127.0.0.1:39547
	I0314 18:49:24.067499  975780 main.go:141] libmachine: () Calling .GetVersion
	I0314 18:49:24.067968  975780 main.go:141] libmachine: Using API Version  1
	I0314 18:49:24.067989  975780 main.go:141] libmachine: () Calling .SetConfigRaw
	I0314 18:49:24.068346  975780 main.go:141] libmachine: () Calling .GetMachineName
	I0314 18:49:24.068527  975780 main.go:141] libmachine: (multinode-669543-m03) Calling .GetState
	I0314 18:49:24.069916  975780 status.go:330] multinode-669543-m03 host status = "Stopped" (err=<nil>)
	I0314 18:49:24.069930  975780 status.go:343] host is not running, skipping remaining checks
	I0314 18:49:24.069938  975780 status.go:257] multinode-669543-m03 status: &{Name:multinode-669543-m03 Host:Stopped Kubelet:Stopped APIServer:Stopped Kubeconfig:Stopped Worker:true TimeToStop: DockerEnv: PodManEnv:}

                                                
                                                
** /stderr **
--- PASS: TestMultiNode/serial/StopNode (3.20s)
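
Note: as the run above shows, the status command exits with code 7 once any node is stopped while still printing the per-node summary; a minimal sketch of acting on that in a script:

	# Non-zero exit (7 in this run) means at least one node or component is not running.
	out/minikube-linux-amd64 -p multinode-669543 status
	rc=$?
	if [ "$rc" -ne 0 ]; then
	    echo "multinode-669543 is degraded (status exit code $rc)"
	fi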

                                                
                                    
TestMultiNode/serial/StartAfterStop (29.42s)

                                                
                                                
=== RUN   TestMultiNode/serial/StartAfterStop
multinode_test.go:282: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 node start m03 -v=7 --alsologtostderr
multinode_test.go:282: (dbg) Done: out/minikube-linux-amd64 -p multinode-669543 node start m03 -v=7 --alsologtostderr: (28.78051481s)
multinode_test.go:290: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status -v=7 --alsologtostderr
multinode_test.go:306: (dbg) Run:  kubectl get nodes
--- PASS: TestMultiNode/serial/StartAfterStop (29.42s)

                                                
                                    
TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                                
=== RUN   TestMultiNode/serial/DeleteNode
multinode_test.go:416: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 node delete m03
multinode_test.go:416: (dbg) Done: out/minikube-linux-amd64 -p multinode-669543 node delete m03: (1.841557594s)
multinode_test.go:422: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr
multinode_test.go:436: (dbg) Run:  kubectl get nodes
multinode_test.go:444: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/DeleteNode (2.39s)

                                                
                                    
TestMultiNode/serial/RestartMultiNode (172.9s)

                                                
                                                
=== RUN   TestMultiNode/serial/RestartMultiNode
multinode_test.go:376: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-669543 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio
multinode_test.go:376: (dbg) Done: out/minikube-linux-amd64 start -p multinode-669543 --wait=true -v=8 --alsologtostderr --driver=kvm2  --container-runtime=crio: (2m52.340364276s)
multinode_test.go:382: (dbg) Run:  out/minikube-linux-amd64 -p multinode-669543 status --alsologtostderr
multinode_test.go:396: (dbg) Run:  kubectl get nodes
multinode_test.go:404: (dbg) Run:  kubectl get nodes -o "go-template='{{range .items}}{{range .status.conditions}}{{if eq .type "Ready"}} {{.status}}{{"\n"}}{{end}}{{end}}{{end}}'"
--- PASS: TestMultiNode/serial/RestartMultiNode (172.90s)

                                                
                                    
TestMultiNode/serial/ValidateNameConflict (48.04s)

                                                
                                                
=== RUN   TestMultiNode/serial/ValidateNameConflict
multinode_test.go:455: (dbg) Run:  out/minikube-linux-amd64 node list -p multinode-669543
multinode_test.go:464: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-669543-m02 --driver=kvm2  --container-runtime=crio
multinode_test.go:464: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p multinode-669543-m02 --driver=kvm2  --container-runtime=crio: exit status 14 (79.049886ms)

                                                
                                                
-- stdout --
	* [multinode-669543-m02] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	! Profile name 'multinode-669543-m02' is duplicated with machine name 'multinode-669543-m02' in profile 'multinode-669543'
	X Exiting due to MK_USAGE: Profile name should be unique

                                                
                                                
** /stderr **
multinode_test.go:472: (dbg) Run:  out/minikube-linux-amd64 start -p multinode-669543-m03 --driver=kvm2  --container-runtime=crio
multinode_test.go:472: (dbg) Done: out/minikube-linux-amd64 start -p multinode-669543-m03 --driver=kvm2  --container-runtime=crio: (46.861743001s)
multinode_test.go:479: (dbg) Run:  out/minikube-linux-amd64 node add -p multinode-669543
multinode_test.go:479: (dbg) Non-zero exit: out/minikube-linux-amd64 node add -p multinode-669543: exit status 80 (231.286778ms)

                                                
                                                
-- stdout --
	* Adding node m03 to cluster multinode-669543 as [worker]
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to GUEST_NODE_ADD: failed to add node: Node multinode-669543-m03 already exists in multinode-669543-m03 profile
	* 
	╭─────────────────────────────────────────────────────────────────────────────────────────────╮
	│                                                                                             │
	│    * If the above advice does not help, please let us know:                                 │
	│      https://github.com/kubernetes/minikube/issues/new/choose                               │
	│                                                                                             │
	│    * Please run `minikube logs --file=logs.txt` and attach logs.txt to the GitHub issue.    │
	│    * Please also attach the following file to the GitHub issue:                             │
	│    * - /tmp/minikube_node_040ea7097fd6ed71e65be9a474587f81f0ccd21d_4.log                    │
	│                                                                                             │
	╰─────────────────────────────────────────────────────────────────────────────────────────────╯

                                                
                                                
** /stderr **
multinode_test.go:484: (dbg) Run:  out/minikube-linux-amd64 delete -p multinode-669543-m03
--- PASS: TestMultiNode/serial/ValidateNameConflict (48.04s)

                                                
                                    
TestScheduledStopUnix (117.54s)

                                                
                                                
=== RUN   TestScheduledStopUnix
scheduled_stop_test.go:128: (dbg) Run:  out/minikube-linux-amd64 start -p scheduled-stop-020899 --memory=2048 --driver=kvm2  --container-runtime=crio
scheduled_stop_test.go:128: (dbg) Done: out/minikube-linux-amd64 start -p scheduled-stop-020899 --memory=2048 --driver=kvm2  --container-runtime=crio: (45.700514707s)
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-020899 --schedule 5m
scheduled_stop_test.go:191: (dbg) Run:  out/minikube-linux-amd64 status --format={{.TimeToStop}} -p scheduled-stop-020899 -n scheduled-stop-020899
scheduled_stop_test.go:169: signal error was:  <nil>
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-020899 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-020899 --cancel-scheduled
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-020899 -n scheduled-stop-020899
E0314 19:06:57.905913  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-020899
scheduled_stop_test.go:137: (dbg) Run:  out/minikube-linux-amd64 stop -p scheduled-stop-020899 --schedule 15s
scheduled_stop_test.go:169: signal error was:  os: process already finished
E0314 19:07:14.528630  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:07:14.854325  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
scheduled_stop_test.go:205: (dbg) Run:  out/minikube-linux-amd64 status -p scheduled-stop-020899
scheduled_stop_test.go:205: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p scheduled-stop-020899: exit status 7 (89.041333ms)

                                                
                                                
-- stdout --
	scheduled-stop-020899
	type: Control Plane
	host: Stopped
	kubelet: Stopped
	apiserver: Stopped
	kubeconfig: Stopped
	

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-020899 -n scheduled-stop-020899
scheduled_stop_test.go:176: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-020899 -n scheduled-stop-020899: exit status 7 (83.327383ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
scheduled_stop_test.go:176: status error: exit status 7 (may be ok)
helpers_test.go:175: Cleaning up "scheduled-stop-020899" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p scheduled-stop-020899
--- PASS: TestScheduledStopUnix (117.54s)
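
Note: a compact sketch of the scheduled-stop flow exercised above (schedule, cancel, re-schedule with a short delay, then confirm the host is stopped):

	# Schedule a stop 5 minutes out, cancel it, then schedule a 15s stop and wait for it to fire.
	out/minikube-linux-amd64 stop -p scheduled-stop-020899 --schedule 5m
	out/minikube-linux-amd64 stop -p scheduled-stop-020899 --cancel-scheduled
	out/minikube-linux-amd64 stop -p scheduled-stop-020899 --schedule 15s
	sleep 30
	out/minikube-linux-amd64 status --format={{.Host}} -p scheduled-stop-020899   # prints Stopped; exit code 7 is expected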

                                                
                                    
TestRunningBinaryUpgrade (204.05s)

                                                
                                                
=== RUN   TestRunningBinaryUpgrade
=== PAUSE TestRunningBinaryUpgrade

                                                
                                                

                                                
                                                
=== CONT  TestRunningBinaryUpgrade
version_upgrade_test.go:120: (dbg) Run:  /tmp/minikube-v1.26.0.2716346202 start -p running-upgrade-082849 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:120: (dbg) Done: /tmp/minikube-v1.26.0.2716346202 start -p running-upgrade-082849 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (1m46.058099554s)
version_upgrade_test.go:130: (dbg) Run:  out/minikube-linux-amd64 start -p running-upgrade-082849 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:130: (dbg) Done: out/minikube-linux-amd64 start -p running-upgrade-082849 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (1m36.247646812s)
helpers_test.go:175: Cleaning up "running-upgrade-082849" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p running-upgrade-082849
helpers_test.go:178: (dbg) Done: out/minikube-linux-amd64 delete -p running-upgrade-082849: (1.174817956s)
--- PASS: TestRunningBinaryUpgrade (204.05s)

                                                
                                    
TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Setup
--- PASS: TestStoppedBinaryUpgrade/Setup (0.48s)

                                                
                                    
TestStoppedBinaryUpgrade/Upgrade (182.2s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/Upgrade
version_upgrade_test.go:183: (dbg) Run:  /tmp/minikube-v1.26.0.2762740636 start -p stopped-upgrade-993845 --memory=2200 --vm-driver=kvm2  --container-runtime=crio
version_upgrade_test.go:183: (dbg) Done: /tmp/minikube-v1.26.0.2762740636 start -p stopped-upgrade-993845 --memory=2200 --vm-driver=kvm2  --container-runtime=crio: (2m13.315154668s)
version_upgrade_test.go:192: (dbg) Run:  /tmp/minikube-v1.26.0.2762740636 -p stopped-upgrade-993845 stop
version_upgrade_test.go:192: (dbg) Done: /tmp/minikube-v1.26.0.2762740636 -p stopped-upgrade-993845 stop: (2.436642641s)
version_upgrade_test.go:198: (dbg) Run:  out/minikube-linux-amd64 start -p stopped-upgrade-993845 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
version_upgrade_test.go:198: (dbg) Done: out/minikube-linux-amd64 start -p stopped-upgrade-993845 --memory=2200 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (46.446374547s)
--- PASS: TestStoppedBinaryUpgrade/Upgrade (182.20s)

                                                
                                    
TestPause/serial/Start (107s)

                                                
                                                
=== RUN   TestPause/serial/Start
pause_test.go:80: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812876 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio
pause_test.go:80: (dbg) Done: out/minikube-linux-amd64 start -p pause-812876 --memory=2048 --install-addons=false --wait=all --driver=kvm2  --container-runtime=crio: (1m47.004851544s)
--- PASS: TestPause/serial/Start (107.00s)

                                                
                                    
TestPause/serial/SecondStartNoReconfiguration (34.98s)

                                                
                                                
=== RUN   TestPause/serial/SecondStartNoReconfiguration
pause_test.go:92: (dbg) Run:  out/minikube-linux-amd64 start -p pause-812876 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio
pause_test.go:92: (dbg) Done: out/minikube-linux-amd64 start -p pause-812876 --alsologtostderr -v=1 --driver=kvm2  --container-runtime=crio: (34.958231645s)
--- PASS: TestPause/serial/SecondStartNoReconfiguration (34.98s)

                                                
                                    
TestStoppedBinaryUpgrade/MinikubeLogs (1.1s)

                                                
                                                
=== RUN   TestStoppedBinaryUpgrade/MinikubeLogs
version_upgrade_test.go:206: (dbg) Run:  out/minikube-linux-amd64 logs -p stopped-upgrade-993845
version_upgrade_test.go:206: (dbg) Done: out/minikube-linux-amd64 logs -p stopped-upgrade-993845: (1.099810815s)
--- PASS: TestStoppedBinaryUpgrade/MinikubeLogs (1.10s)

                                                
                                    
TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoK8sWithVersion
no_kubernetes_test.go:83: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:83: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --kubernetes-version=1.20 --driver=kvm2  --container-runtime=crio: exit status 14 (78.616387ms)

                                                
                                                
-- stdout --
	* [NoKubernetes-578974] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	X Exiting due to MK_USAGE: cannot specify --kubernetes-version with --no-kubernetes,
	to unset a global config run:
	
	$ minikube config unset kubernetes-version

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/StartNoK8sWithVersion (0.08s)

                                                
                                    
TestNoKubernetes/serial/StartWithK8s (49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithK8s
no_kubernetes_test.go:95: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578974 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:95: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578974 --driver=kvm2  --container-runtime=crio: (48.731378031s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-578974 status -o json
--- PASS: TestNoKubernetes/serial/StartWithK8s (49.00s)

                                                
                                    
TestPause/serial/Pause (0.83s)

                                                
                                                
=== RUN   TestPause/serial/Pause
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812876 --alsologtostderr -v=5
--- PASS: TestPause/serial/Pause (0.83s)

                                                
                                    
TestPause/serial/VerifyStatus (0.26s)

                                                
                                                
=== RUN   TestPause/serial/VerifyStatus
status_test.go:76: (dbg) Run:  out/minikube-linux-amd64 status -p pause-812876 --output=json --layout=cluster
status_test.go:76: (dbg) Non-zero exit: out/minikube-linux-amd64 status -p pause-812876 --output=json --layout=cluster: exit status 2 (264.060444ms)

                                                
                                                
-- stdout --
	{"Name":"pause-812876","StatusCode":418,"StatusName":"Paused","Step":"Done","StepDetail":"* Paused 6 containers in: kube-system, kubernetes-dashboard, storage-gluster, istio-operator","BinaryVersion":"v1.32.0","Components":{"kubeconfig":{"Name":"kubeconfig","StatusCode":200,"StatusName":"OK"}},"Nodes":[{"Name":"pause-812876","StatusCode":200,"StatusName":"OK","Components":{"apiserver":{"Name":"apiserver","StatusCode":418,"StatusName":"Paused"},"kubelet":{"Name":"kubelet","StatusCode":405,"StatusName":"Stopped"}}}]}

                                                
                                                
-- /stdout --
--- PASS: TestPause/serial/VerifyStatus (0.26s)
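Note: in the --layout=cluster JSON above, StatusCode 418 renders as "Paused" and 405 as "Stopped". A sketch of pulling the per-component state out of that output with jq (jq is assumed to be available; the test itself does not use it):

    minikube status -p pause-812876 --output=json --layout=cluster \
      | jq -r '.Nodes[].Components | to_entries[] | "\(.key): \(.value.StatusName)"'
    # apiserver: Paused
    # kubelet: Stopped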

                                                
                                    
x
+
TestPause/serial/Unpause (0.76s)

                                                
                                                
=== RUN   TestPause/serial/Unpause
pause_test.go:121: (dbg) Run:  out/minikube-linux-amd64 unpause -p pause-812876 --alsologtostderr -v=5
--- PASS: TestPause/serial/Unpause (0.76s)

                                                
                                    
x
+
TestPause/serial/PauseAgain (1s)

                                                
                                                
=== RUN   TestPause/serial/PauseAgain
pause_test.go:110: (dbg) Run:  out/minikube-linux-amd64 pause -p pause-812876 --alsologtostderr -v=5
pause_test.go:110: (dbg) Done: out/minikube-linux-amd64 pause -p pause-812876 --alsologtostderr -v=5: (1.002507767s)
--- PASS: TestPause/serial/PauseAgain (1.00s)

                                                
                                    
x
+
TestPause/serial/DeletePaused (1.08s)

                                                
                                                
=== RUN   TestPause/serial/DeletePaused
pause_test.go:132: (dbg) Run:  out/minikube-linux-amd64 delete -p pause-812876 --alsologtostderr -v=5
pause_test.go:132: (dbg) Done: out/minikube-linux-amd64 delete -p pause-812876 --alsologtostderr -v=5: (1.079501348s)
--- PASS: TestPause/serial/DeletePaused (1.08s)

                                                
                                    
x
+
TestNetworkPlugins/group/false (3.48s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/false
net_test.go:246: (dbg) Run:  out/minikube-linux-amd64 start -p false-058224 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio
net_test.go:246: (dbg) Non-zero exit: out/minikube-linux-amd64 start -p false-058224 --memory=2048 --alsologtostderr --cni=false --driver=kvm2  --container-runtime=crio: exit status 14 (209.437063ms)

                                                
                                                
-- stdout --
	* [false-058224] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	  - MINIKUBE_LOCATION=18384
	  - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	  - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	  - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	  - MINIKUBE_BIN=out/minikube-linux-amd64
	  - MINIKUBE_FORCE_SYSTEMD=
	* Using the kvm2 driver based on user configuration
	
	

                                                
                                                
-- /stdout --
** stderr ** 
	I0314 19:11:11.653107  984529 out.go:291] Setting OutFile to fd 1 ...
	I0314 19:11:11.653255  984529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:11:11.653266  984529 out.go:304] Setting ErrFile to fd 2...
	I0314 19:11:11.653270  984529 out.go:338] TERM=,COLORTERM=, which probably does not support color
	I0314 19:11:11.653474  984529 root.go:338] Updating PATH: /home/jenkins/minikube-integration/18384-942544/.minikube/bin
	I0314 19:11:11.654065  984529 out.go:298] Setting JSON to false
	I0314 19:11:11.655074  984529 start.go:129] hostinfo: {"hostname":"ubuntu-20-agent-13","uptime":96824,"bootTime":1710346648,"procs":229,"os":"linux","platform":"ubuntu","platformFamily":"debian","platformVersion":"20.04","kernelVersion":"5.15.0-1053-gcp","kernelArch":"x86_64","virtualizationSystem":"kvm","virtualizationRole":"guest","hostId":"591c9f12-2938-3743-e2bf-c56a050d43d1"}
	I0314 19:11:11.655141  984529 start.go:139] virtualization: kvm guest
	I0314 19:11:11.657497  984529 out.go:177] * [false-058224] minikube v1.32.0 on Ubuntu 20.04 (kvm/amd64)
	I0314 19:11:11.659214  984529 out.go:177]   - MINIKUBE_LOCATION=18384
	I0314 19:11:11.659248  984529 notify.go:220] Checking for updates...
	I0314 19:11:11.660605  984529 out.go:177]   - MINIKUBE_SUPPRESS_DOCKER_PERFORMANCE=true
	I0314 19:11:11.661994  984529 out.go:177]   - KUBECONFIG=/home/jenkins/minikube-integration/18384-942544/kubeconfig
	I0314 19:11:11.663319  984529 out.go:177]   - MINIKUBE_HOME=/home/jenkins/minikube-integration/18384-942544/.minikube
	I0314 19:11:11.664637  984529 out.go:177]   - MINIKUBE_BIN=out/minikube-linux-amd64
	I0314 19:11:11.665800  984529 out.go:177]   - MINIKUBE_FORCE_SYSTEMD=
	I0314 19:11:11.667527  984529 config.go:182] Loaded profile config "NoKubernetes-578974": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:11:11.667694  984529 config.go:182] Loaded profile config "kubernetes-upgrade-097195": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.20.0
	I0314 19:11:11.667852  984529 config.go:182] Loaded profile config "pause-812876": Driver=kvm2, ContainerRuntime=crio, KubernetesVersion=v1.28.4
	I0314 19:11:11.667962  984529 driver.go:392] Setting default libvirt URI to qemu:///system
	I0314 19:11:11.798607  984529 out.go:177] * Using the kvm2 driver based on user configuration
	I0314 19:11:11.799897  984529 start.go:297] selected driver: kvm2
	I0314 19:11:11.799915  984529 start.go:901] validating driver "kvm2" against <nil>
	I0314 19:11:11.799928  984529 start.go:912] status for kvm2: {Installed:true Healthy:true Running:true NeedsImprovement:false Error:<nil> Reason: Fix: Doc: Version:}
	I0314 19:11:11.801940  984529 out.go:177] 
	W0314 19:11:11.803412  984529 out.go:239] X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	X Exiting due to MK_USAGE: The "crio" container runtime requires CNI
	I0314 19:11:11.804682  984529 out.go:177] 

                                                
                                                
** /stderr **
net_test.go:88: 
----------------------- debugLogs start: false-058224 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "false-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: false-058224

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "false-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p false-058224"

                                                
                                                
----------------------- debugLogs end: false-058224 [took: 3.10412994s] --------------------------------
helpers_test.go:175: Cleaning up "false-058224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p false-058224
--- PASS: TestNetworkPlugins/group/false (3.48s)
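Note: the MK_USAGE exit above is the expected outcome for the "false" CNI group: the crio runtime requires a CNI, so --cni=false is rejected before any VM is created and the debug logs simply report the missing profile. A sketch contrasting the rejected flag combination with a valid one ("demo" is an illustrative profile name; flannel is the CNI a later subtest in this report uses):

    minikube start -p demo --cni=false   --driver=kvm2 --container-runtime=crio   # exit 14: "crio" requires CNI
    minikube start -p demo --cni=flannel --driver=kvm2 --container-runtime=crio   # valid: an explicit CNI is selected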

                                                
                                    
x
+
TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                                
=== RUN   TestPause/serial/VerifyDeletedResources
pause_test.go:142: (dbg) Run:  out/minikube-linux-amd64 profile list --output json
--- PASS: TestPause/serial/VerifyDeletedResources (0.27s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartWithStopK8s (40.61s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartWithStopK8s
no_kubernetes_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --driver=kvm2  --container-runtime=crio
E0314 19:11:57.577461  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
no_kubernetes_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --driver=kvm2  --container-runtime=crio: (39.308488632s)
no_kubernetes_test.go:200: (dbg) Run:  out/minikube-linux-amd64 -p NoKubernetes-578974 status -o json
no_kubernetes_test.go:200: (dbg) Non-zero exit: out/minikube-linux-amd64 -p NoKubernetes-578974 status -o json: exit status 2 (248.485327ms)

                                                
                                                
-- stdout --
	{"Name":"NoKubernetes-578974","Host":"Running","Kubelet":"Stopped","APIServer":"Stopped","Kubeconfig":"Configured","Worker":false}

                                                
                                                
-- /stdout --
no_kubernetes_test.go:124: (dbg) Run:  out/minikube-linux-amd64 delete -p NoKubernetes-578974
no_kubernetes_test.go:124: (dbg) Done: out/minikube-linux-amd64 delete -p NoKubernetes-578974: (1.056397494s)
--- PASS: TestNoKubernetes/serial/StartWithStopK8s (40.61s)

                                                
                                    
x
+
TestNoKubernetes/serial/Start (50.37s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Start
no_kubernetes_test.go:136: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:136: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578974 --no-kubernetes --driver=kvm2  --container-runtime=crio: (50.372043487s)
--- PASS: TestNoKubernetes/serial/Start (50.37s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunning
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-578974 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-578974 "sudo systemctl is-active --quiet service kubelet": exit status 1 (214.57126ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunning (0.21s)
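Note: a sketch of the check this subtest performs. systemctl is-active exits 0 only when the unit is active, so the non-zero remote status (3 here) is exactly what is expected for a --no-kubernetes profile; minikube ssh then surfaces it as its own exit status 1:

    minikube ssh -p NoKubernetes-578974 "sudo systemctl is-active --quiet service kubelet"
    echo $?    # 1 (minikube ssh wrapping the remote non-zero status)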

                                                
                                    
x
+
TestNoKubernetes/serial/ProfileList (0.9s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/ProfileList
no_kubernetes_test.go:169: (dbg) Run:  out/minikube-linux-amd64 profile list
no_kubernetes_test.go:179: (dbg) Run:  out/minikube-linux-amd64 profile list --output=json
--- PASS: TestNoKubernetes/serial/ProfileList (0.90s)

                                                
                                    
x
+
TestNoKubernetes/serial/Stop (1.49s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/Stop
no_kubernetes_test.go:158: (dbg) Run:  out/minikube-linux-amd64 stop -p NoKubernetes-578974
no_kubernetes_test.go:158: (dbg) Done: out/minikube-linux-amd64 stop -p NoKubernetes-578974: (1.490222606s)
--- PASS: TestNoKubernetes/serial/Stop (1.49s)

                                                
                                    
x
+
TestNoKubernetes/serial/StartNoArgs (64.98s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/StartNoArgs
no_kubernetes_test.go:191: (dbg) Run:  out/minikube-linux-amd64 start -p NoKubernetes-578974 --driver=kvm2  --container-runtime=crio
no_kubernetes_test.go:191: (dbg) Done: out/minikube-linux-amd64 start -p NoKubernetes-578974 --driver=kvm2  --container-runtime=crio: (1m4.983446189s)
--- PASS: TestNoKubernetes/serial/StartNoArgs (64.98s)

                                                
                                    
x
+
TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                                
=== RUN   TestNoKubernetes/serial/VerifyK8sNotRunningSecond
no_kubernetes_test.go:147: (dbg) Run:  out/minikube-linux-amd64 ssh -p NoKubernetes-578974 "sudo systemctl is-active --quiet service kubelet"
no_kubernetes_test.go:147: (dbg) Non-zero exit: out/minikube-linux-amd64 ssh -p NoKubernetes-578974 "sudo systemctl is-active --quiet service kubelet": exit status 1 (218.02001ms)

                                                
                                                
** stderr ** 
	ssh: Process exited with status 3

                                                
                                                
** /stderr **
--- PASS: TestNoKubernetes/serial/VerifyK8sNotRunningSecond (0.22s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/FirstStart (150.86s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-731976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-731976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (2m30.858402206s)
--- PASS: TestStartStop/group/no-preload/serial/FirstStart (150.86s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/FirstStart (97.05s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-992669 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-992669 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m37.047063213s)
--- PASS: TestStartStop/group/embed-certs/serial/FirstStart (97.05s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.51s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-440341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-440341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (1m40.512235092s)
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/FirstStart (100.51s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/DeployApp (7.32s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-731976 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [03b44efa-57c3-4ad4-869b-a23129e8aeb1] Pending
helpers_test.go:344: "busybox" [03b44efa-57c3-4ad4-869b-a23129e8aeb1] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [03b44efa-57c3-4ad4-869b-a23129e8aeb1] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/no-preload/serial/DeployApp: integration-test=busybox healthy within 7.005663484s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context no-preload-731976 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/no-preload/serial/DeployApp (7.32s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p no-preload-731976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p no-preload-731976 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.05305203s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context no-preload-731976 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/no-preload/serial/EnableAddonWhileActive (1.14s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-992669 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [31cd73a4-aac3-4a23-863d-be39b65c376f] Pending
helpers_test.go:344: "busybox" [31cd73a4-aac3-4a23-863d-be39b65c376f] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [31cd73a4-aac3-4a23-863d-be39b65c376f] Running
E0314 19:17:14.527599  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:17:14.853756  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
start_stop_delete_test.go:196: (dbg) TestStartStop/group/embed-certs/serial/DeployApp: integration-test=busybox healthy within 9.004412456s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context embed-certs-992669 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/embed-certs/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-992669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p embed-certs-992669 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.063171686s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context embed-certs-992669 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/embed-certs/serial/EnableAddonWhileActive (1.15s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.3s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/DeployApp
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 create -f testdata/busybox.yaml
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: waiting 8m0s for pods matching "integration-test=busybox" in namespace "default" ...
helpers_test.go:344: "busybox" [f743eb49-e8e5-4dd5-984e-e8d77a7b9000] Pending / Ready:ContainersNotReady (containers with unready status: [busybox]) / ContainersReady:ContainersNotReady (containers with unready status: [busybox])
helpers_test.go:344: "busybox" [f743eb49-e8e5-4dd5-984e-e8d77a7b9000] Running
start_stop_delete_test.go:196: (dbg) TestStartStop/group/default-k8s-diff-port/serial/DeployApp: integration-test=busybox healthy within 9.004113766s
start_stop_delete_test.go:196: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 exec busybox -- /bin/sh -c "ulimit -n"
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/DeployApp (9.30s)

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-440341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p default-k8s-diff-port-440341 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.056651145s)
start_stop_delete_test.go:215: (dbg) Run:  kubectl --context default-k8s-diff-port-440341 describe deploy/metrics-server -n kube-system
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/EnableAddonWhileActive (1.13s)

                                                
                                    
x
+
TestStartStop/group/no-preload/serial/SecondStart (660.56s)

                                                
                                                
=== RUN   TestStartStop/group/no-preload/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p no-preload-731976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p no-preload-731976 --memory=2200 --alsologtostderr --wait=true --preload=false --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (11m0.278465804s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p no-preload-731976 -n no-preload-731976
--- PASS: TestStartStop/group/no-preload/serial/SecondStart (660.56s)

                                                
                                    
x
+
TestStartStop/group/embed-certs/serial/SecondStart (602.56s)

                                                
                                                
=== RUN   TestStartStop/group/embed-certs/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p embed-certs-992669 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p embed-certs-992669 --memory=2200 --alsologtostderr --wait=true --embed-certs --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (10m2.264355014s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p embed-certs-992669 -n embed-certs-992669
--- PASS: TestStartStop/group/embed-certs/serial/SecondStart (602.56s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/Stop (4.31s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p old-k8s-version-968094 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p old-k8s-version-968094 --alsologtostderr -v=3: (4.307519422s)
--- PASS: TestStartStop/group/old-k8s-version/serial/Stop (4.31s)

                                                
                                    
x
+
TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)

                                                
                                                
=== RUN   TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p old-k8s-version-968094 -n old-k8s-version-968094: exit status 7 (75.661368ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p old-k8s-version-968094 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/old-k8s-version/serial/EnableAddonAfterStop (0.21s)
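Note: the stopped host makes minikube status exit 7 here, which the test explicitly tolerates ("may be ok") before re-enabling the dashboard addon against the stopped profile. A sketch of the same sequence (commands taken from the run above):

    minikube status --format={{.Host}} -p old-k8s-version-968094      # prints "Stopped", exit 7
    minikube addons enable dashboard -p old-k8s-version-968094 \
        --images=MetricsScraper=registry.k8s.io/echoserver:1.4        # succeeds while the cluster is stopped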

                                                
                                    
x
+
TestStartStop/group/default-k8s-diff-port/serial/SecondStart (571.52s)

                                                
                                                
=== RUN   TestStartStop/group/default-k8s-diff-port/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p default-k8s-diff-port-440341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4
E0314 19:22:14.527857  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:22:14.854472  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 19:23:37.907025  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 19:27:14.528483  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:27:14.854636  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
E0314 19:28:37.577999  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p default-k8s-diff-port-440341 --memory=2200 --alsologtostderr --wait=true --apiserver-port=8444 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.28.4: (9m31.23098744s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p default-k8s-diff-port-440341 -n default-k8s-diff-port-440341
--- PASS: TestStartStop/group/default-k8s-diff-port/serial/SecondStart (571.52s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/FirstStart (60.29s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/FirstStart
start_stop_delete_test.go:186: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-549136 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
start_stop_delete_test.go:186: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-549136 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (1m0.287859141s)
--- PASS: TestStartStop/group/newest-cni/serial/FirstStart (60.29s)

                                                
                                    
x
+
TestNetworkPlugins/group/auto/Start (102.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p auto-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p auto-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --driver=kvm2  --container-runtime=crio: (1m42.947249716s)
--- PASS: TestNetworkPlugins/group/auto/Start (102.95s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/DeployApp (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/DeployApp
--- PASS: TestStartStop/group/newest-cni/serial/DeployApp (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonWhileActive
start_stop_delete_test.go:205: (dbg) Run:  out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-549136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain
start_stop_delete_test.go:205: (dbg) Done: out/minikube-linux-amd64 addons enable metrics-server -p newest-cni-549136 --images=MetricsServer=registry.k8s.io/echoserver:1.4 --registries=MetricsServer=fake.domain: (1.533990108s)
start_stop_delete_test.go:211: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonWhileActive (1.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Stop (10.63s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Stop
start_stop_delete_test.go:228: (dbg) Run:  out/minikube-linux-amd64 stop -p newest-cni-549136 --alsologtostderr -v=3
start_stop_delete_test.go:228: (dbg) Done: out/minikube-linux-amd64 stop -p newest-cni-549136 --alsologtostderr -v=3: (10.629872685s)
--- PASS: TestStartStop/group/newest-cni/serial/Stop (10.63s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/EnableAddonAfterStop
start_stop_delete_test.go:239: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549136 -n newest-cni-549136
start_stop_delete_test.go:239: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549136 -n newest-cni-549136: exit status 7 (96.773318ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:239: status error: exit status 7 (may be ok)
start_stop_delete_test.go:246: (dbg) Run:  out/minikube-linux-amd64 addons enable dashboard -p newest-cni-549136 --images=MetricsScraper=registry.k8s.io/echoserver:1.4
--- PASS: TestStartStop/group/newest-cni/serial/EnableAddonAfterStop (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/SecondStart (42.49s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/SecondStart
start_stop_delete_test.go:256: (dbg) Run:  out/minikube-linux-amd64 start -p newest-cni-549136 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2
E0314 19:45:17.578219  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
start_stop_delete_test.go:256: (dbg) Done: out/minikube-linux-amd64 start -p newest-cni-549136 --memory=2200 --alsologtostderr --wait=apiserver,system_pods,default_sa --feature-gates ServerSideApply=true --network-plugin=cni --extra-config=kubeadm.pod-network-cidr=10.42.0.0/16 --driver=kvm2  --container-runtime=crio --kubernetes-version=v1.29.0-rc.2: (42.184573038s)
start_stop_delete_test.go:262: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Host}} -p newest-cni-549136 -n newest-cni-549136
--- PASS: TestStartStop/group/newest-cni/serial/SecondStart (42.49s)

                                                
                                    
x
+
TestNetworkPlugins/group/flannel/Start (88.53s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p flannel-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p flannel-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=flannel --driver=kvm2  --container-runtime=crio: (1m28.531018417s)
--- PASS: TestNetworkPlugins/group/flannel/Start (88.53s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop
start_stop_delete_test.go:273: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/UserAppExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/AddonExistsAfterStop
start_stop_delete_test.go:284: WARNING: cni mode requires additional setup before pods can schedule :(
--- PASS: TestStartStop/group/newest-cni/serial/AddonExistsAfterStop (0.00s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/VerifyKubernetesImages
start_stop_delete_test.go:304: (dbg) Run:  out/minikube-linux-amd64 -p newest-cni-549136 image list --format=json
start_stop_delete_test.go:304: Found non-minikube image: kindest/kindnetd:v20230809-80a64d96
--- PASS: TestStartStop/group/newest-cni/serial/VerifyKubernetesImages (0.24s)

                                                
                                    
x
+
TestStartStop/group/newest-cni/serial/Pause (2.69s)

                                                
                                                
=== RUN   TestStartStop/group/newest-cni/serial/Pause
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 pause -p newest-cni-549136 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549136 -n newest-cni-549136
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549136 -n newest-cni-549136: exit status 2 (271.071334ms)

                                                
                                                
-- stdout --
	Paused

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549136 -n newest-cni-549136
start_stop_delete_test.go:311: (dbg) Non-zero exit: out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549136 -n newest-cni-549136: exit status 2 (288.147451ms)

                                                
                                                
-- stdout --
	Stopped

                                                
                                                
-- /stdout --
start_stop_delete_test.go:311: status error: exit status 2 (may be ok)
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 unpause -p newest-cni-549136 --alsologtostderr -v=1
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549136 -n newest-cni-549136
start_stop_delete_test.go:311: (dbg) Run:  out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549136 -n newest-cni-549136
--- PASS: TestStartStop/group/newest-cni/serial/Pause (2.69s)
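For reference, the pause/unpause sequence exercised above can be reproduced by hand against the same profile. A minimal sketch (profile name newest-cni-549136 taken from this run); note that, as the log shows, `minikube status` exits with status 2 while components are paused, so a script has to tolerate the non-zero exit rather than treat it as a failure:

	out/minikube-linux-amd64 pause -p newest-cni-549136
	out/minikube-linux-amd64 status --format={{.APIServer}} -p newest-cni-549136 || true   # prints "Paused", exit status 2
	out/minikube-linux-amd64 status --format={{.Kubelet}} -p newest-cni-549136 || true     # prints "Stopped", exit status 2
	out/minikube-linux-amd64 unpause -p newest-cni-549136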

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Start (111.05s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p enable-default-cni-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p enable-default-cni-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --enable-default-cni=true --driver=kvm2  --container-runtime=crio: (1m51.051074761s)
--- PASS: TestNetworkPlugins/group/enable-default-cni/Start (111.05s)

                                                
                                    
TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p auto-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/auto/KubeletFlags (0.23s)

                                                
                                    
TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context auto-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-r9gpp" [ef818ec8-4147-4f69-bc73-6aa032c1d8fd] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-r9gpp" [ef818ec8-4147-4f69-bc73-6aa032c1d8fd] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/auto/NetCatPod: app=netcat healthy within 12.004441199s
--- PASS: TestNetworkPlugins/group/auto/NetCatPod (12.22s)

                                                
                                    
TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/DNS
net_test.go:175: (dbg) Run:  kubectl --context auto-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/auto/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/Localhost
net_test.go:194: (dbg) Run:  kubectl --context auto-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/auto/Localhost (0.18s)

                                                
                                    
TestNetworkPlugins/group/auto/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/auto/HairPin
net_test.go:264: (dbg) Run:  kubectl --context auto-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/auto/HairPin (0.15s)
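The Localhost and HairPin subtests above run the same netcat probe from inside the deployment: the first connects to localhost:8080, the second connects back through the pod's own `netcat` service, so a pass suggests hairpin traffic via the service works. A minimal sketch of the pair, assuming the auto-058224 context and the netcat deployment created from testdata/netcat-deployment.yaml:

	kubectl --context auto-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"   # loopback inside the pod
	kubectl --context auto-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"      # pod reaching itself through its service (hairpin)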

                                                
                                    
TestNetworkPlugins/group/bridge/Start (106.38s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p bridge-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p bridge-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=bridge --driver=kvm2  --container-runtime=crio: (1m46.377773508s)
--- PASS: TestNetworkPlugins/group/bridge/Start (106.38s)

                                                
                                    
TestNetworkPlugins/group/calico/Start (114.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p calico-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio
E0314 19:46:48.878687  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:48.884310  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:48.894635  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:48.914930  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:48.955248  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:49.035630  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:49.196227  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:49.516821  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:50.157478  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:51.438426  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:53.999503  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
E0314 19:46:59.120319  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p calico-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=calico --driver=kvm2  --container-runtime=crio: (1m54.223452696s)
--- PASS: TestNetworkPlugins/group/calico/Start (114.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: waiting 10m0s for pods matching "app=flannel" in namespace "kube-flannel" ...
helpers_test.go:344: "kube-flannel-ds-nkk4w" [87bdb460-a622-48f0-8a73-bf8c845237b5] Running
E0314 19:47:09.361225  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/no-preload-731976/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/flannel/ControllerPod: app=flannel healthy within 6.005253699s
--- PASS: TestNetworkPlugins/group/flannel/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p flannel-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/flannel/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context flannel-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-dnj2b" [38b9e1bb-8b61-4693-9674-0d0c241d5894] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0314 19:47:14.528659  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/functional-059245/client.crt: no such file or directory
E0314 19:47:14.854406  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/addons-677681/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-dnj2b" [38b9e1bb-8b61-4693-9674-0d0c241d5894] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/flannel/NetCatPod: app=netcat healthy within 12.003920956s
--- PASS: TestNetworkPlugins/group/flannel/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context flannel-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/flannel/DNS (0.22s)

                                                
                                    
TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context flannel-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context flannel-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/flannel/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Start (77.95s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p kindnet-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p kindnet-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=kindnet --driver=kvm2  --container-runtime=crio: (1m17.948226266s)
--- PASS: TestNetworkPlugins/group/kindnet/Start (77.95s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p enable-default-cni-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/enable-default-cni/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context enable-default-cni-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-qs486" [4b581c5e-1414-47e8-969a-abd92eab7a95] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-qs486" [4b581c5e-1414-47e8-969a-abd92eab7a95] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/enable-default-cni/NetCatPod: app=netcat healthy within 12.004186956s
--- PASS: TestNetworkPlugins/group/enable-default-cni/NetCatPod (12.24s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/DNS
net_test.go:175: (dbg) Run:  kubectl --context enable-default-cni-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/enable-default-cni/DNS (0.19s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/Localhost
net_test.go:194: (dbg) Run:  kubectl --context enable-default-cni-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/enable-default-cni/HairPin
net_test.go:264: (dbg) Run:  kubectl --context enable-default-cni-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/enable-default-cni/HairPin (0.15s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Start (86.13s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Start
net_test.go:112: (dbg) Run:  out/minikube-linux-amd64 start -p custom-flannel-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio
net_test.go:112: (dbg) Done: out/minikube-linux-amd64 start -p custom-flannel-058224 --memory=3072 --alsologtostderr --wait=true --wait-timeout=15m --cni=testdata/kube-flannel.yaml --driver=kvm2  --container-runtime=crio: (1m26.127197213s)
--- PASS: TestNetworkPlugins/group/custom-flannel/Start (86.13s)

                                                
                                    
TestNetworkPlugins/group/bridge/KubeletFlags (0.3s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p bridge-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/bridge/KubeletFlags (0.30s)

                                                
                                    
TestNetworkPlugins/group/bridge/NetCatPod (14.31s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context bridge-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-gkrxf" [ff90e76a-ffd2-4ac1-ad2e-9663cc55a221] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0314 19:48:24.712122  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt: no such file or directory
E0314 19:48:24.747595  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:24.753012  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:24.764001  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:24.784227  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:24.824998  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:24.905810  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:25.066334  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:25.386740  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:26.027572  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
E0314 19:48:27.307840  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-gkrxf" [ff90e76a-ffd2-4ac1-ad2e-9663cc55a221] Running
E0314 19:48:29.868787  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
net_test.go:163: (dbg) TestNetworkPlugins/group/bridge/NetCatPod: app=netcat healthy within 14.005591616s
--- PASS: TestNetworkPlugins/group/bridge/NetCatPod (14.31s)

                                                
                                    
TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/DNS
net_test.go:175: (dbg) Run:  kubectl --context bridge-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/bridge/DNS (0.25s)

                                                
                                    
TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/Localhost
net_test.go:194: (dbg) Run:  kubectl --context bridge-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/bridge/Localhost (0.19s)

                                                
                                    
TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/bridge/HairPin
net_test.go:264: (dbg) Run:  kubectl --context bridge-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
E0314 19:48:34.992365  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
--- PASS: TestNetworkPlugins/group/bridge/HairPin (0.23s)

                                                
                                    
TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: waiting 10m0s for pods matching "k8s-app=calico-node" in namespace "kube-system" ...
helpers_test.go:344: "calico-node-kzcq6" [f7a91fbe-42c6-402a-89bb-d2111a86982c] Running
E0314 19:48:45.193390  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/old-k8s-version-968094/client.crt: no such file or directory
E0314 19:48:45.232500  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
net_test.go:120: (dbg) TestNetworkPlugins/group/calico/ControllerPod: k8s-app=calico-node healthy within 6.006733629s
--- PASS: TestNetworkPlugins/group/calico/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p calico-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/calico/KubeletFlags (0.25s)

                                                
                                    
TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context calico-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-x25vb" [56e8f92c-4dd1-4693-bc08-745ed534f4e5] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-x25vb" [56e8f92c-4dd1-4693-bc08-745ed534f4e5] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/calico/NetCatPod: app=netcat healthy within 12.005197931s
--- PASS: TestNetworkPlugins/group/calico/NetCatPod (12.32s)

                                                
                                    
TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/ControllerPod
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: waiting 10m0s for pods matching "app=kindnet" in namespace "kube-system" ...
helpers_test.go:344: "kindnet-pqdz8" [7ea5377f-b189-4793-b434-d87accb6a47b] Running
net_test.go:120: (dbg) TestNetworkPlugins/group/kindnet/ControllerPod: app=kindnet healthy within 6.009247657s
--- PASS: TestNetworkPlugins/group/kindnet/ControllerPod (6.01s)

                                                
                                    
TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/DNS
net_test.go:175: (dbg) Run:  kubectl --context calico-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/calico/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/Localhost
net_test.go:194: (dbg) Run:  kubectl --context calico-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/calico/Localhost (0.15s)

                                                
                                    
TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/calico/HairPin
net_test.go:264: (dbg) Run:  kubectl --context calico-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/calico/HairPin (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p kindnet-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/kindnet/KubeletFlags (0.24s)

                                                
                                    
TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context kindnet-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-sllcj" [9b56bf9b-fd71-4357-8d38-80c21713d450] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
helpers_test.go:344: "netcat-56589dfd74-sllcj" [9b56bf9b-fd71-4357-8d38-80c21713d450] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/kindnet/NetCatPod: app=netcat healthy within 12.004423355s
--- PASS: TestNetworkPlugins/group/kindnet/NetCatPod (12.28s)

                                                
                                    
TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/DNS
net_test.go:175: (dbg) Run:  kubectl --context kindnet-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/kindnet/DNS (0.18s)

                                                
                                    
TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/Localhost
net_test.go:194: (dbg) Run:  kubectl --context kindnet-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/kindnet/Localhost (0.14s)

                                                
                                    
TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kindnet/HairPin
net_test.go:264: (dbg) Run:  kubectl --context kindnet-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/kindnet/HairPin (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/KubeletFlags
net_test.go:133: (dbg) Run:  out/minikube-linux-amd64 ssh -p custom-flannel-058224 "pgrep -a kubelet"
--- PASS: TestNetworkPlugins/group/custom-flannel/KubeletFlags (0.22s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/NetCatPod
net_test.go:149: (dbg) Run:  kubectl --context custom-flannel-058224 replace --force -f testdata/netcat-deployment.yaml
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: waiting 15m0s for pods matching "app=netcat" in namespace "default" ...
helpers_test.go:344: "netcat-56589dfd74-75gsf" [1dafad63-988f-46f0-a2c0-1ee74fa66a86] Pending / Ready:ContainersNotReady (containers with unready status: [dnsutils]) / ContainersReady:ContainersNotReady (containers with unready status: [dnsutils])
E0314 19:49:46.674490  951311 cert_rotation.go:168] key failed with : open /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/default-k8s-diff-port-440341/client.crt: no such file or directory
helpers_test.go:344: "netcat-56589dfd74-75gsf" [1dafad63-988f-46f0-a2c0-1ee74fa66a86] Running
net_test.go:163: (dbg) TestNetworkPlugins/group/custom-flannel/NetCatPod: app=netcat healthy within 10.00468883s
--- PASS: TestNetworkPlugins/group/custom-flannel/NetCatPod (10.21s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/DNS
net_test.go:175: (dbg) Run:  kubectl --context custom-flannel-058224 exec deployment/netcat -- nslookup kubernetes.default
--- PASS: TestNetworkPlugins/group/custom-flannel/DNS (0.17s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/Localhost
net_test.go:194: (dbg) Run:  kubectl --context custom-flannel-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z localhost 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/Localhost (0.16s)

                                                
                                    
TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/custom-flannel/HairPin
net_test.go:264: (dbg) Run:  kubectl --context custom-flannel-058224 exec deployment/netcat -- /bin/sh -c "nc -w 5 -i 5 -z netcat 8080"
--- PASS: TestNetworkPlugins/group/custom-flannel/HairPin (0.14s)

                                                
                                    

Test skip (39/325)

Order skipped test Duration
5 TestDownloadOnly/v1.20.0/cached-images 0
6 TestDownloadOnly/v1.20.0/binaries 0
7 TestDownloadOnly/v1.20.0/kubectl 0
14 TestDownloadOnly/v1.28.4/cached-images 0
15 TestDownloadOnly/v1.28.4/binaries 0
16 TestDownloadOnly/v1.28.4/kubectl 0
23 TestDownloadOnly/v1.29.0-rc.2/cached-images 0
24 TestDownloadOnly/v1.29.0-rc.2/binaries 0
25 TestDownloadOnly/v1.29.0-rc.2/kubectl 0
29 TestDownloadOnlyKic 0
43 TestAddons/parallel/Olm 0
56 TestDockerFlags 0
59 TestDockerEnvContainerd 0
61 TestHyperKitDriverInstallOrUpdate 0
62 TestHyperkitDriverSkipUpgrade 0
113 TestFunctional/parallel/DockerEnv 0
114 TestFunctional/parallel/PodmanEnv 0
121 TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel 0.01
122 TestFunctional/parallel/TunnelCmd/serial/StartTunnel 0.01
123 TestFunctional/parallel/TunnelCmd/serial/WaitService 0.01
124 TestFunctional/parallel/TunnelCmd/serial/AccessDirect 0.01
125 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig 0.01
126 TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil 0.01
127 TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS 0.01
128 TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel 0.01
162 TestGvisorAddon 0
184 TestImageBuild 0
211 TestKicCustomNetwork 0
212 TestKicExistingNetwork 0
213 TestKicCustomSubnet 0
214 TestKicStaticIP 0
246 TestChangeNoneUser 0
249 TestScheduledStopWindows 0
251 TestSkaffold 0
253 TestInsufficientStorage 0
257 TestMissingContainerUpgrade 0
265 TestStartStop/group/disable-driver-mounts 0.16
277 TestNetworkPlugins/group/kubenet 3.45
290 TestNetworkPlugins/group/cilium 4.19
TestDownloadOnly/v1.20.0/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.20.0/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.20.0/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.20.0/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.20.0/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.20.0/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.28.4/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.28.4/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.28.4/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.28.4/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.28.4/kubectl (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/cached-images (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/cached-images
aaa_download_only_test.go:129: Preload exists, images won't be cached
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/cached-images (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/binaries (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/binaries
aaa_download_only_test.go:151: Preload exists, binaries are present within.
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/binaries (0.00s)

                                                
                                    
TestDownloadOnly/v1.29.0-rc.2/kubectl (0s)

                                                
                                                
=== RUN   TestDownloadOnly/v1.29.0-rc.2/kubectl
aaa_download_only_test.go:167: Test for darwin and windows
--- SKIP: TestDownloadOnly/v1.29.0-rc.2/kubectl (0.00s)

                                                
                                    
TestDownloadOnlyKic (0s)

                                                
                                                
=== RUN   TestDownloadOnlyKic
aaa_download_only_test.go:220: skipping, only for docker or podman driver
--- SKIP: TestDownloadOnlyKic (0.00s)

                                                
                                    
TestAddons/parallel/Olm (0s)

                                                
                                                
=== RUN   TestAddons/parallel/Olm
=== PAUSE TestAddons/parallel/Olm

                                                
                                                

                                                
                                                
=== CONT  TestAddons/parallel/Olm
addons_test.go:498: Skipping OLM addon test until https://github.com/operator-framework/operator-lifecycle-manager/issues/2534 is resolved
--- SKIP: TestAddons/parallel/Olm (0.00s)

                                                
                                    
TestDockerFlags (0s)

                                                
                                                
=== RUN   TestDockerFlags
docker_test.go:41: skipping: only runs with docker container runtime, currently testing crio
--- SKIP: TestDockerFlags (0.00s)

                                                
                                    
TestDockerEnvContainerd (0s)

                                                
                                                
=== RUN   TestDockerEnvContainerd
docker_test.go:170: running with crio false linux amd64
docker_test.go:172: skipping: TestDockerEnvContainerd can only be run with the containerd runtime on Docker driver
--- SKIP: TestDockerEnvContainerd (0.00s)

                                                
                                    
TestHyperKitDriverInstallOrUpdate (0s)

                                                
                                                
=== RUN   TestHyperKitDriverInstallOrUpdate
driver_install_or_update_test.go:105: Skip if not darwin.
--- SKIP: TestHyperKitDriverInstallOrUpdate (0.00s)

                                                
                                    
TestHyperkitDriverSkipUpgrade (0s)

                                                
                                                
=== RUN   TestHyperkitDriverSkipUpgrade
driver_install_or_update_test.go:169: Skip if not darwin.
--- SKIP: TestHyperkitDriverSkipUpgrade (0.00s)

                                                
                                    
TestFunctional/parallel/DockerEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/DockerEnv
=== PAUSE TestFunctional/parallel/DockerEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/DockerEnv
functional_test.go:459: only validate docker env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/DockerEnv (0.00s)

                                                
                                    
TestFunctional/parallel/PodmanEnv (0s)

                                                
                                                
=== RUN   TestFunctional/parallel/PodmanEnv
=== PAUSE TestFunctional/parallel/PodmanEnv

                                                
                                                

                                                
                                                
=== CONT  TestFunctional/parallel/PodmanEnv
functional_test.go:546: only validate podman env with docker container runtime, currently testing crio
--- SKIP: TestFunctional/parallel/PodmanEnv (0.00s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/RunSecondTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/StartTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/StartTunnel (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/WaitService
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/WaitService (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessDirect
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessDirect (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDig (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DNSResolutionByDscacheutil (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/AccessThroughDNS (0.01s)

                                                
                                    
TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                                
=== RUN   TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel
functional_test_tunnel_test.go:90: password required to execute 'route', skipping testTunnel: exit status 1
--- SKIP: TestFunctional/parallel/TunnelCmd/serial/DeleteTunnel (0.01s)

                                                
                                    
TestGvisorAddon (0s)

                                                
                                                
=== RUN   TestGvisorAddon
gvisor_addon_test.go:34: skipping test because --gvisor=false
--- SKIP: TestGvisorAddon (0.00s)

                                                
                                    
TestImageBuild (0s)

                                                
                                                
=== RUN   TestImageBuild
image_test.go:33: 
--- SKIP: TestImageBuild (0.00s)

                                                
                                    
TestKicCustomNetwork (0s)

                                                
                                                
=== RUN   TestKicCustomNetwork
kic_custom_network_test.go:34: only runs with docker driver
--- SKIP: TestKicCustomNetwork (0.00s)

                                                
                                    
TestKicExistingNetwork (0s)

                                                
                                                
=== RUN   TestKicExistingNetwork
kic_custom_network_test.go:73: only runs with docker driver
--- SKIP: TestKicExistingNetwork (0.00s)

                                                
                                    
TestKicCustomSubnet (0s)

                                                
                                                
=== RUN   TestKicCustomSubnet
kic_custom_network_test.go:102: only runs with docker/podman driver
--- SKIP: TestKicCustomSubnet (0.00s)

                                                
                                    
TestKicStaticIP (0s)

                                                
                                                
=== RUN   TestKicStaticIP
kic_custom_network_test.go:123: only run with docker/podman driver
--- SKIP: TestKicStaticIP (0.00s)

                                                
                                    
TestChangeNoneUser (0s)

                                                
                                                
=== RUN   TestChangeNoneUser
none_test.go:38: Test requires none driver and SUDO_USER env to not be empty
--- SKIP: TestChangeNoneUser (0.00s)

                                                
                                    
TestScheduledStopWindows (0s)

                                                
                                                
=== RUN   TestScheduledStopWindows
scheduled_stop_test.go:42: test only runs on windows
--- SKIP: TestScheduledStopWindows (0.00s)

                                                
                                    
TestSkaffold (0s)

                                                
                                                
=== RUN   TestSkaffold
skaffold_test.go:45: skaffold requires docker-env, currently testing crio container runtime
--- SKIP: TestSkaffold (0.00s)

                                                
                                    
TestInsufficientStorage (0s)

                                                
                                                
=== RUN   TestInsufficientStorage
status_test.go:38: only runs with docker driver
--- SKIP: TestInsufficientStorage (0.00s)

                                                
                                    
TestMissingContainerUpgrade (0s)

                                                
                                                
=== RUN   TestMissingContainerUpgrade
version_upgrade_test.go:284: This test is only for Docker
--- SKIP: TestMissingContainerUpgrade (0.00s)

                                                
                                    
TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                                
=== RUN   TestStartStop/group/disable-driver-mounts
=== PAUSE TestStartStop/group/disable-driver-mounts

                                                
                                                

                                                
                                                
=== CONT  TestStartStop/group/disable-driver-mounts
start_stop_delete_test.go:103: skipping TestStartStop/group/disable-driver-mounts - only runs on virtualbox
helpers_test.go:175: Cleaning up "disable-driver-mounts-993602" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p disable-driver-mounts-993602
--- SKIP: TestStartStop/group/disable-driver-mounts (0.16s)

                                                
                                    
TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/kubenet
net_test.go:93: Skipping the test as crio container runtimes requires CNI
panic.go:626: 
----------------------- debugLogs start: kubenet-058224 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "kubenet-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters:
- cluster:
    certificate-authority: /home/jenkins/minikube-integration/18384-942544/.minikube/ca.crt
    extensions:
    - extension:
        last-update: Thu, 14 Mar 2024 19:11:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: cluster_info
    server: https://192.168.72.72:8443
  name: pause-812876
contexts:
- context:
    cluster: pause-812876
    extensions:
    - extension:
        last-update: Thu, 14 Mar 2024 19:11:04 UTC
        provider: minikube.sigs.k8s.io
        version: v1.32.0
      name: context_info
    namespace: default
    user: pause-812876
  name: pause-812876
current-context: pause-812876
kind: Config
preferences: {}
users:
- name: pause-812876
  user:
    client-certificate: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/pause-812876/client.crt
    client-key: /home/jenkins/minikube-integration/18384-942544/.minikube/profiles/pause-812876/client.key

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: kubenet-058224

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "kubenet-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p kubenet-058224"

                                                
                                                
----------------------- debugLogs end: kubenet-058224 [took: 3.304109767s] --------------------------------
helpers_test.go:175: Cleaning up "kubenet-058224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p kubenet-058224
--- SKIP: TestNetworkPlugins/group/kubenet (3.45s)

                                                
                                    
x
+
TestNetworkPlugins/group/cilium (4.19s)

                                                
                                                
=== RUN   TestNetworkPlugins/group/cilium
net_test.go:102: Skipping the test as it's interfering with other tests and is outdated
panic.go:626: 
----------------------- debugLogs start: cilium-058224 [pass: true] --------------------------------
>>> netcat: nslookup kubernetes.default:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: nslookup debug kubernetes.default a-records:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: dig search kubernetes.default:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local udp/53:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: dig @10.96.0.10 kubernetes.default.svc.cluster.local tcp/53:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 udp/53:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: nc 10.96.0.10 tcp/53:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/nsswitch.conf:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/hosts:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> netcat: /etc/resolv.conf:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> host: /etc/nsswitch.conf:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/hosts:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/resolv.conf:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> k8s: nodes, services, endpoints, daemon sets, deployments and pods, :
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> host: crictl pods:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: crictl containers:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe netcat deployment:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe netcat pod(s):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: netcat logs:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns deployment:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe coredns pods:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: coredns logs:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe api server pod(s):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: api server logs:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: /etc/cni:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: ip a s:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: ip r s:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: iptables-save:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: iptables table nat:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> k8s: describe cilium daemon set pod(s):
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (current):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium daemon set container(s) logs (previous):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> k8s: describe cilium deployment pod(s):
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (current):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: cilium deployment container(s) logs (previous):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy daemon set:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: describe kube-proxy pod(s):
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> k8s: kube-proxy logs:
error: context "cilium-058224" does not exist

                                                
                                                

                                                
                                                
>>> host: kubelet daemon status:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: kubelet daemon config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubelet logs:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/kubernetes/kubelet.conf:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /var/lib/kubelet/config.yaml:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> k8s: kubectl config:
apiVersion: v1
clusters: null
contexts: null
current-context: ""
kind: Config
preferences: {}
users: null

                                                
                                                

                                                
                                                
>>> k8s: cms:
Error in configuration: context was not found for specified context: cilium-058224

                                                
                                                

                                                
                                                
>>> host: docker daemon status:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: docker daemon config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/docker/daemon.json:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: docker system info:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon status:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: cri-docker daemon config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/systemd/system/cri-docker.service.d/10-cni.conf:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /usr/lib/systemd/system/cri-docker.service:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: cri-dockerd version:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon status:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: containerd daemon config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /lib/systemd/system/containerd.service:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/containerd/config.toml:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: containerd config dump:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon status:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: crio daemon config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: /etc/crio:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                

                                                
                                                
>>> host: crio config:
* Profile "cilium-058224" not found. Run "minikube profile list" to view all profiles.
To start a cluster, run: "minikube start -p cilium-058224"

                                                
                                                
----------------------- debugLogs end: cilium-058224 [took: 3.989773015s] --------------------------------
helpers_test.go:175: Cleaning up "cilium-058224" profile ...
helpers_test.go:178: (dbg) Run:  out/minikube-linux-amd64 delete -p cilium-058224
--- SKIP: TestNetworkPlugins/group/cilium (4.19s)

                                                
                                    